Review

A Systematic Literature Review on Applications of GAN-Synthesized Images for Brain MRI

1 Symbiosis Institute of Technology (SIT), Symbiosis International (Deemed University) (SIU), Lavale, Pune 412115, India
2 School of Computer Science and Engineering, The University of New South Wales, Sydney, NSW 2052, Australia
3 School of NUOVOS, Ajeenkya DY Patil University, Pune 412105, India
4 Swiss School of Business and Management, 1213 Geneva, Switzerland
5 Symbiosis Centre for Applied Artificial Intelligence (SCAAI), Symbiosis International (Deemed University) (SIU), Pune 412115, India
* Authors to whom correspondence should be addressed.
Future Internet 2022, 14(12), 351; https://doi.org/10.3390/fi14120351
Submission received: 23 September 2022 / Revised: 20 November 2022 / Accepted: 21 November 2022 / Published: 25 November 2022
(This article belongs to the Special Issue Trends of Data Science and Knowledge Discovery)

Abstract: With the advances in brain imaging, magnetic resonance imaging (MRI) is evolving into a popular radiological tool in clinical diagnosis. Deep learning (DL) methods can detect abnormalities in brain images without an extensive manual feature extraction process. Generative adversarial network (GAN)-synthesized images have many applications in this field besides augmentation, such as image translation, registration, super-resolution, denoising, motion correction, segmentation, reconstruction, and contrast enhancement. The existing literature was reviewed systematically to understand the role of GAN-synthesized images in brain disease diagnosis. The Web of Science and Scopus databases were extensively searched for relevant studies from the last six years to write this systematic literature review (SLR). Predefined inclusion and exclusion criteria helped in filtering the search results, and data extraction was guided by the research questions (RQs). This SLR identifies the various loss functions used in the above applications and the software used to process brain MRIs. A comparative study of existing evaluation metrics for GAN-synthesized images helps in choosing the proper metric for an application. GAN-synthesized images will have a crucial role in the clinical sector in the coming years, and this paper gives a baseline for other researchers in the field.

1. Introduction

Computer-aided diagnosis is a valuable aid to clinicians, provided the biomedical images are of high quality, such as magnetic resonance imaging (MRI) scans. MRI is a preferred imaging technique compared to computerized tomography (CT) and positron emission tomography (PET), as it gives better soft tissue contrast and is free of any ionizing radiation [1]. In MRI, hydrogen nuclei (protons) in water molecules are reoriented by applying a strong magnetic field. As a result, images with details of anatomical structures are available for analysis. The quality of MRI is affected by three factors during the scanning process: the density of protons, and the T1 and T2 relaxation times, where T1 and T2 vary according to the type of body tissue [2]. MRI scans are vital for neuronavigation and for characterizing the brain and other cranial structures in detail [3].
The deep learning (DL) framework allows for the automated analysis of inputs, in contrast to handcrafted feature analysis [4]. DL methods help address common medical imaging problems, such as increasing accuracy and precision and enhancing the speed of image analysis and image contrast [5]. However, DL models need large datasets to train effectively, which are generally not adequately available in the biomedical field [6]. A generative adversarial network (GAN) is a prospective solution to this problem, as it can generate training samples that duplicate the distribution of the real dataset. In medical imaging, these GAN-synthesized image samples are used foremost for dataset enhancement; the images can further serve applications such as image translation, registration, super-resolution, denoising, motion correction, segmentation, reconstruction, and contrast enhancement. The GAN was proposed by Ian Goodfellow et al. [7] in 2014 to generate synthetic images duplicating the ground-truth images. A GAN is a combination of two networks, the generator and the discriminator, which are trained simultaneously. The generator produces fake images imitating the original images, which are then sent to the discriminator network to be compared and evaluated against the authentic images. The discriminator is trained on both types of inputs and learns the features of the original input. Its classification error, the discriminator loss, is conveyed to the generator network, which modifies its parameters for improvement. The situation resembles a min-max game, where each network's strategy minimizes its maximum possible loss (see Figure 1). This work presents the latest GAN trends for MRI brain image analysis.
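Formally, the two networks optimize the original min-max objective of [7], which in standard notation reads

\min_G \max_D V(D,G) = \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))],

where G is the generator, D the discriminator, x a real image, and z a noise vector sampled from the prior p_z.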
Section 1 of this SLR introduces the topic. Section 2 presents the methodology, including a review of prior research in the related field, the motivation behind this SLR, the research questions, the search strategy, and the inclusion-exclusion criteria. The results of the SLR, guided by the research questions, are reported in Section 3. The discussion and conclusion are presented in Section 4 and Section 5, respectively.

2. Materials and Methods

The authors adopted an unbiased review process and a targeted approach while writing this SLR, following a systematic sequence of steps: defining the research questions (RQs) based on predefined goals, identifying the targeted journals and conferences, finding the already available reviews, forming the relevant search strategy, identifying papers based on the selection criteria, comprehending the content, extracting the data, summarizing the results, and constructing the final document.

2.1. Prior Research

The closest review paper found on this topic is by Hazrat Ali et al. [8], which presents a scoping review of GAN applications in brain MRI, publicly available brain MRI datasets, evaluation procedures, and focal brain diseases. However, that paper does not detail the methods used in GAN applications, evaluation metrics are mentioned only superficially, and there is no description of loss functions or preprocessing software. Creswell et al. [9] described GAN training and the mode collapse issue related to GAN stability; their review is not limited to MRI or brain images. Table 1 summarizes the previous research work.

2.2. Motivation

Since its invention in 2014, GAN has been a preferred choice in research. The advent of deep learning has highlighted its dependency on large training datasets, which are usually rare in the biomedical field. The number of peer-reviewed papers on GANs with an MRI-only workflow for brain imaging has increased considerably. For example, the number of research papers in this field indexed in the Web of Science (WoS) and Scopus databases shows an increasing trend (Figure 2). The study of related work in the literature highlights the following two limitations:
  • There is no existing SLR paper that describes the applications of GAN-synthesized brain MRI.
  • There is no existing SLR paper that mentions the types of GAN loss functions.
This SLR contributes to the literature in the following manner:
  • The paper presents a clear categorical division of brain MRI applications of GAN-synthesized images as well as the details of the technique.
  • The paper presents a concise account of distinct loss functions used in GAN training.
  • The paper identifies software to preprocess brain MRI.
  • The paper compares the various evaluation metrics available for the performance evaluation of synthetic images.

2.3. Research Questions

The authors performed a systematic review of the available literature in two databases following the directions of the preferred reporting items for systematic reviews and meta-analyses (PRISMA) methodology [12]. The PRISMA flowchart for the retrieval and selection of papers is shown in Figure 3. Table 2 indexes the research questions (RQs) considered for this SLR.

2.4. Search Strategy

The applications of GAN-synthesized images for brain MRI were explored by querying the Web of Science and Scopus databases. Papers containing the following terms in the title, abstract, and keywords were identified: “(MR Imaging) OR (MRI) OR (magnetic resonance imaging)” AND “(Brain Imaging) OR (Brain Images)” AND “GAN OR Generative Adversarial Network.” A comprehensive screening of the papers published in the Web of Science (January 2017 to August 2022) and Scopus (January 2017 to August 2022) databases resulted in 210 and 389 peer-reviewed articles, respectively. The SLR quality is maintained by reviewing only results from quartile 1 (Q1) or quartile 2 (Q2) journals and recent conference papers. Q1 indicates that a journal’s impact factor lies in the top 25% of the journal impact factor (JIF) distribution for a category, and Q2 indicates the next 25% slot of the JIF distribution [13]. We are therefore hopeful that the proposed search terms capture most of the published work on GANs for brain MRI. Table 3 gives details of the search queries applied to both databases.
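For illustration, the combined terms can be rendered in Scopus-style advanced search syntax roughly as follows; this is an approximation for readability, not a verbatim copy of the queries detailed in Table 3:

    TITLE-ABS-KEY ( ( "MR Imaging" OR "MRI" OR "magnetic resonance imaging" )
        AND ( "Brain Imaging" OR "Brain Images" )
        AND ( "GAN" OR "Generative Adversarial Network" ) )
        AND PUBYEAR > 2016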

2.5. Inclusion and Exclusion Criteria

A well-defined selection criterion lays out the guidelines for the inclusion or exclusion of a particular research paper in the systematic review; it should be impartial toward the authors’ names and institutions. This study includes full-length articles involving GANs for brain MRI. The focus is on Q1- or Q2-rated peer-reviewed journals and recent conferences to ensure high-quality search results. A requirement of full-text availability reduced the volume of papers. Studies in languages other than English were excluded. Only the MRI modality was considered as the source image for the review, excluding other modalities such as CT, PET, and X-ray. Reports based on non-human brains were excluded, as were review papers, editorials, and book chapters. The final count of study papers is n = 145 after applying the inclusion-exclusion criteria given in Table 4.

3. Results

3.1. Applications of GAN-Synthesized Images for Brain MRI (RQ 1)

The size of the dataset is usually limited in medical imaging; therefore, synthetic images are of great importance in training the networks. GAN-synthesized images are considered a primary remedy for the scarcity of model training data in deep learning implementations. Data augmentation is the most apparent application of GAN-synthesized brain MRI, yet there are other imaging purposes, as shown in Figure 4. The following subsections describe how the synthetic brain images generated by various GAN models are used for different brain imaging purposes.

3.1.1. Image Translation

Considering more than one imaging modality is recommended to obtain a complete picture of abnormalities. The main task in image-to-image translation is a reliable mapping from the source image to the synthetic image [14]. The process requires a well-chosen loss function to inculcate this mapping into a high-quality translated image [15]. However, acquiring all modalities is often infeasible due to limitations such as scanning time, cost, radiation dose, and patient age. GANs are extensively used to generate CT and PET images from source brain MRIs.
A. MRI-to-CT Translation:
In current radiotherapy, both CT and MRI are used to better diagnose brain abnormalities. On the one hand, CT images give the electron density score required for treatment planning; on the other hand, MRI is equally contributory with its superior contrast at soft brain tissues. MRI-only treatment planning, made possible with GANs, can reduce the registration misalignment between CT and MRI scans, curtail the imaging cost, improve radiotherapy accuracy, and lower the patient’s exposure to ionizing radiation compared to the CT process. The GAN model can work on paired or unpaired training images as inputs. Unpaired datasets are readily available but are hard to deal with, as the mapping between input and output is unknown. Although challenging to access, paired datasets make GAN development easier.
The authors in [16] generated synthetic CT images using mutual information (MI) as the loss function to avoid the issue of misalignment between MRI and CT images. The study in [17] modifies [16]: a conditional GAN (CGAN) model checks the dosimetric accuracy of synthetic CT (SCT) images of patients with brain tumors, with the aim of using them in MRI-only treatment planning for proton therapy. Along with MI, binary cross-entropy is used as the loss function in the discriminator. The study in [18] compares the similarity between the SCT and the original CT, where a CGAN based on the pix2pix architecture produces the SCT. Radiation dose calculation is a significant difficulty in the MR-only workflow, as it is hard to derive electron density information from MRI scans alone. The study in [19] discusses SCT generation and evaluates dosimetric accuracy.
The MedGAN in [20] fuses non-adversarial losses from recent image style transfer techniques to capture the desired target modality’s high- and low-frequency details. MedGAN works at the image level in an end-to-end manner, giving better performance than patch-wise training, which suffers from limited modeling capacity. CasNet, a new generator architecture, uses encoder-decoder pairs to raise the acuteness of the resultant images by gradual refinement. The MR-based attenuation correction (MRAC) process is extensively used in PET/MR systems for photon attenuation correction. In the atlas-based MRAC method, a CGAN generates photon attenuation maps from the SCT. Skip connections of U-Net and the GAN loss restore edge information in images [21]. The dosimetric and image-guided radiation therapy (IGRT) method of SCT generation is discussed in [22], where a GAN with a ResNet generator and a CNN discriminator creates SCT images from T1-weighted post-gadolinium MRI. In [23], a spatial attention-guided generative adversarial network (Attention-GAN) minimizes the spatial difference in SCTs and can deal with atypical anatomies and outliers in a better way. The framework projects the regions of interest (ROIs), and the transformation network performs the domain change. The attention term is the sum of absolute values across the channel dimension in different layers of the discriminator.

CycleGAN is the second most commonly used GAN model in image translation applications. The cycleGAN can work on unpaired data but may introduce inconsistent anatomical features in the generated images. The unsupervised attention-guided GAN (UAGGAN) model can work with both paired and unpaired images and can be used for bidirectional MR-CT image synthesis. First, supervised pre-training fine-tunes the network parameters; then, unsupervised training improves the medical image translation. The combination of the WGAN adversarial loss with content loss and L1 loss assures global consistency in the output image. The UAGGAN achieves satisfactory performance by producing attention masks [24]. In [25], a cycleGAN with dense blocks performs two transformation mappings (MRI to CT and CT to MRI) simultaneously. A multi-scale patch-based GAN performs unpaired domain translation and generates 3D medical images of high resolution. The approach has a low memory requirement: a low-resolution version is generated and later converted into a high-resolution version using patches of constant size [26]. A three-dimensional cycleGAN uses inverse transformation and inverse supervision to learn the mapping between MRI and CT image pairs for proton treatment planning of base-of-skull (BoS) tumors. The dense-block-based generator explores image patches for textural and structural features [27]. The attenuation correction (AC) needed for a PET image is accomplished by a 3D cycleGAN, where a 3D U-Net generator produces continuous AC maps from Dixon MR images without MR and CT image registration. The downsampling and upsampling layers in the 3D U-Net reduce the memory requirements [28]. StarGAN performs image translation among more than one pair of classes. In [29], a counterfactual activation generator (CAG) implements image transformation for seven classes. This setting extracts task-sensitive features from brain activations by comparing ground-truth, real, and synthetic images.
In [30], high-dimensional input maps are translated to high-dimensional output maps with the help of Pix2Pix-cGANs to colorize the tumor region in intracranial tumor MRI images.
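To make the paired/unpaired distinction concrete, the cycle-consistency term that lets cycleGAN-style models train on unpaired MR-CT data can be sketched minimally as follows (illustrative PyTorch; the generator modules G_mr2ct and G_ct2mr and the weight lam are assumptions, not taken from any cited study):

    import torch.nn.functional as F

    def cycle_consistency_loss(real_mr, real_ct, G_mr2ct, G_ct2mr, lam=10.0):
        # forward cycle: MR -> synthetic CT -> reconstructed MR
        rec_mr = G_ct2mr(G_mr2ct(real_mr))
        # backward cycle: CT -> synthetic MR -> reconstructed CT
        rec_ct = G_mr2ct(G_ct2mr(real_ct))
        # L1 reconstruction errors; no voxel-wise aligned MR-CT pairs are needed
        return lam * (F.l1_loss(rec_mr, real_mr) + F.l1_loss(rec_ct, real_ct))

This term is added to the usual adversarial losses of the two discriminators; it is what removes the need for paired training images.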
B. MRI-to-PET Translation:
MRI scans also find applications in synthetic PET scan generation, similar to SCT generation. A good-quality PET image requires a full-dose tracer, but the potential health hazards posed by radioactive exposure raise concerns about the use of PET images. A 3D auto-context-based locality adaptive multimodality GAN (LAGAN) generates superior FDG PET using the same kernel for every input modality. The locality-adaptive fusion network produces a fused image by learning convolutional kernels at different image locations. These fused images are then used for generator training, keeping the number of parameters low while increasing the number of modalities. Contrary to the multimodality cases where convolution is performed globally, the method in [31] concentrates on locality-adaptive convolution. In PET imaging, if the tracer dose is lowered in consideration of its negative effect on the patient, unwanted noise and artifacts compromise the quality of the resultant image. Two networks, a convolutional auto-encoder and a GAN, generate adaptive PET templates with the help of a C-PIB PET scan and a T1-weighted MRI sequence. These synthetic PET images are used for the spatial normalization of amyloid PET scans during Alzheimer’s disease estimation [32]. In multiple sclerosis, demyelination occurs in the brain’s white matter and the spinal cord. The Sketcher-Refiner GAN predicts the PET-derived myelin content map from multimodal MRI by first sketching the anatomy and physiological details and then producing the myelin content map. The model is an extension of CGAN with a 3D U-Net generator working on four MRI modalities as inputs [33]. A task-induced pyramid and attention GAN (TPA-GAN) integrates a pyramid convolution and an attention module to create the absent PET image from the corresponding MR. Three sub-networks perform the whole task: a pyramid-and-attention generator, a standard discriminator, and a task-induced discriminator [34].
Bidirectional mapping GAN (BMGAN), a 3D end-to-end network, in [35] makes use of image contexts and latent vectors to generate PET from brain MRI. The model employs a generator, a discriminator, and an encoder to fuse the semantic features of PET scans with the high-dimensional latent space. The forward mapping step during model training encodes the PET images into the latent space. The backward mapping step enables the generator to produce PET images from the MRI and sampled latent vectors. Finally, the encoder reconstructs the input latent vector from the synthetic PET scan. A hybrid GAN (HGAN) employs a hybrid loss function to produce absent PET images using cues from corresponding MRI scans. A spatially-constrained Fisher representation (SCFR) network derives statistical details from multimodal neuroimaging data [36]. In [37], cycleGAN generates synthetic FDG-PET from T1-weighted MRI in two ways: one from three adjacent transverse slices, and the other from 3D mini-patches. Two CNNs, ScaleNet and HighRes3DNet, and one CGAN were trained to map structural MR to nonspecific (NS) PET images [38]. Table 5 presents the summary of image translation.

3.1.2. Image Registration

Image registration processes images so they can be fused to extract more information. In some cases, moving images are transformed to match fixed reference images. The purposes behind image registration include motion correction, pose estimation, spatial normalization, atlas-based segmentation, and aligning images from multiple subjects [39].
A deep pose estimation network speedily enables slice-to-volume and volume-to-volume registration of brain anatomy. Transformation variables are adjusted by multi-scale registrations that initiate the iterative optimization process. A CGAN learns region-based distortions of multimodal registration from T1- to T2-weighted images, and a regression-type CNN predicts the angle-axis representation of 3D motion. CycleGAN can be used for cases where paired images are unavailable [40]. A cycleGAN-based model performs symmetric image registration of unimodal/multimodal images, where an inverse-consistency constraint enables bi-directional spatial transformations between images. SymReg-GAN, an extension of cycleGAN, performs semi-supervised learning using both labeled and unlabeled image pairs. The spatial transformer performs the differentiable operation and warps the moving image using an estimated transformation [41].
The geometric transformation estimates the association of physically corresponding points within a pair of images’ fields-of-view (FOVs). This transformation can lead to asymmetric and biased mapping, where the “fixed” image is unaffected while the “moving” image undergoes interpolation that simultaneously smooths it. Most current registration methods focus on asymmetric, directional image registration. Multi-atlas-based brain image parcellation (MAP) is a technique in which numerous brain atlases are registered to a new reference map; manually labeled brain regions are propagated and combined into the final parcellation result. The generator of the multi-atlas-guided fully convolutional network with multi-level feature skip connections (MA-FCN-SC) produces the parcellation of the input brain image [42]. In the multi-atlas-guided deep learning parcellation (DLP) technique, attributes of the most suitable atlas guide the parcellation of the target brain map. An FCN with squeeze-and-excitation (SE) blocks in a GAN (FCN-SE-GAN) performs better than the MAP technique since it avoids nonlinear registration. The improvement stems from three factors: brain atlases, automatic brain atlas selection, and the GAN [43]. An unsupervised adversarial similarity network performs registration without ground-truth deformation images or specific similarity metrics for network training, and it applies to both mono-modal and multimodal 3D image registration. A spatial transformation layer connects the registration and discrimination networks [44]. Image registration is crucial for brain atlas building, but it also helps monitor continuous changes across multiple patient visits. Deep networks can be trained on a specific dataset for applications where sufficient ground-truth data are unavailable. However, a network trained to register a pair of chest X-ray images cannot produce the same quality output on a pair of brain MRI scans; in such cases, the network needs to be retrained. GAN-based registration of an image pair, combined with segmentation and transfer learning, addresses this, so other image pairs can use the trained model without retraining. Two convolutional auto-encoders are used for encoding and decoding [45]. Table 6 presents the summary of GAN-synthesized images used for registration.
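As an illustration of the differentiable warping step these registration networks share, a minimal 2D sketch of a spatial transformer that warps a moving image by a predicted displacement field might look as follows (assumed tensor shapes; a simplification, not the implementation of any study above):

    import torch
    import torch.nn.functional as F

    def warp(moving, flow):
        # moving: (N, C, H, W) image; flow: (N, 2, H, W) displacement in pixels
        N, _, H, W = moving.shape
        ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
        base = torch.stack((xs, ys), dim=0).float().unsqueeze(0)  # identity grid
        grid = base.to(flow.device) + flow
        # normalize coordinates to [-1, 1], the convention grid_sample expects
        gx = 2.0 * grid[:, 0] / (W - 1) - 1.0
        gy = 2.0 * grid[:, 1] / (H - 1) - 1.0
        grid = torch.stack((gx, gy), dim=-1)  # (N, H, W, 2)
        return F.grid_sample(moving, grid, align_corners=True)

Because the sampling is differentiable, the registration loss can be backpropagated through the warp into the network that predicts the flow.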

3.1.3. Image Super-Resolution

The super-resolution (SR) technique converts low-resolution images to high-resolution images without changing the scanner settings and imaging sequences. These SR methods achieve a higher SNR and reduced blurriness at edges compared to conventional interpolation methods [46]. In the super-resolution process, several low-resolution images taken from slightly different viewpoints are used to predict the high-resolution version. Sufficient prior information allows better prediction than the actual measurements alone would [47].
The single image super-resolution (SISR) method is vital for medical images as it helps in diagnosing disease. A lesion-focused SR (LFSR) method has been developed that produces seemingly more realistic SR images. In the LFSR method, a multi-scale GAN (MSGAN) produces multi-scale SR and higher-dimensional images from the lower-dimensional version [48]. Training a GAN becomes complicated when the inputs are high-resolution, high-dimensional images; therefore, information learning is divided among several GANs. First, a shaping network in an unconditional super-resolution GAN (SR-GAN) is employed to pick up the three-dimensional discrepancies in the shape of adult brains. Then, a texture network based on conditional pix2pix GAN improves image slices with realistic local contrast patterns. Finally, the shape network is trained with the WGAN with gradient penalty (WGAN-GP) method; it is an unconditional generator that captures the brain’s three-dimensional spatial distortions [49].
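Since WGAN-GP recurs throughout this and later subsections, a minimal sketch of its gradient penalty term may be useful (illustrative PyTorch; D is an assumed critic/discriminator module):

    import torch

    def gradient_penalty(D, real, fake):
        # interpolate randomly between real and generated samples
        eps = torch.rand(real.size(0), 1, 1, 1, device=real.device)
        mixed = (eps * real + (1 - eps) * fake).requires_grad_(True)
        grads = torch.autograd.grad(D(mixed).sum(), mixed, create_graph=True)[0]
        # penalize deviation of the gradient norm from 1 (the Lipschitz constraint)
        return ((grads.flatten(1).norm(2, dim=1) - 1) ** 2).mean()

The penalty, scaled by a coefficient (10 in the original WGAN-GP paper), is added to the critic loss to enforce the 1-Lipschitz condition that the Wasserstein formulation requires.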
In [50], the authors used the progressive upscaling method to generate true colors. The multi-path architecture of the SRGAN model extracts shallow features on multiple scales, with filter sizes of three, five, and seven instead of a single scale. The upscaled features are mapped back to a high-resolution image through a reconstruction convolutional layer. Enhanced SRGAN (ESRGAN) implements super-resolution 2D MRI slice creation, where slices from three different latitudes are selected for 2D super-resolution and later reconstructed into three-dimensional form. The first half of the three-dimensional matrices is reconstructed from high-resolution slices with good texture features; then, the three-dimensional slices are repaired through interpolation to obtain new brain MRI data. VGG16 [51] is employed before activation to restore the features, solve the over-brightness of SRGAN, and improve performance [52]. The work of [53] is also based on ESRGAN, where two neural networks complete the super-resolution task. The first network, the receptive field block ESRGAN (RFB-ESRGAN), selects half the number of slices for super-resolution reconstruction and MRI rebuilding and upholds high-frequency information. The second network, the noise-based network (NESRGAN), completes the second super-resolution reconstruction task with noise and interpolated sampling, repairing the reconstructed MRI’s absent values. The linear interpolation technique is involved in feature extraction and up-sampling. Neonatal brain MRI scans have a low, anisotropic resolution; to increase the resolution, medical image SR using GAN (MedSRGAN) uses a residual whole-map attention network (RWMAN) first to interpolate and then to segment [54].
Existing super-resolution methods are scale-specific and cannot generalize across magnification scales. The medical image arbitrary-scale super-resolution (MIASSR) method coupled with GAN executes super-resolution for modalities such as cardiac MR scans and chest CTs via transfer learning [55]. Similarly, in [56], simultaneous super-resolution and segmentation are performed for 3D neonatal brain MRI on a simulated low-resolution version; the learned model then upgrades and segments real clinical low-resolution images. In 2D MR acquisition, the pulse sequence decides the slice thickness. The exact characteristics of signal excitation are not explicitly known, which gives little information about slice selection profiles (SSPs) and leaves insufficient training data. This problem can be solved by predicting a relative SSP from the difference between in-plane and through-plane image patches. Thicker slices and larger slice spacing are used to decrease scan time and achieve a high signal-to-noise ratio, resulting in a lower through-plane resolution than in-plane resolution. The GAN-based method focuses on improving the resolution of the through-plane slices, where the training data is a degraded version of in-plane slices matched to the through-plane resolution [57]. A high signal-to-noise ratio in an MRI scan can assist in correctly detecting Alzheimer’s disease. Using the GAN-based SR technique, image quality equivalent to a 3-T scanner can be achieved without altering scanner parameters, even if the scans are obtained on 1.5-T scanners. The generator creates a transformation mask, and the discriminator differentiates the synthetic 3-T image from the original 3-T image [58].
In [59], fine perceptive generative adversarial networks (FPGANs) adopt a divide-and-conquer scheme to extract the low-frequency and high-frequency features of MR images separately and in parallel. The model first decomposes an MR image into low-frequency global-approximation and high-frequency anatomical-texture subbands in the wavelet domain. The subband GAN simultaneously performs a super-resolving process on each subband image, resulting in finer anatomical structure recovery. The study [60] uses an end-to-end GAN architecture to produce high-resolution 3D images. The training is performed hierarchically, producing a low-resolution scan and a randomly selected part of the high-resolution scan simultaneously. This provides two benefits: first, the memory requirement for training on high-resolution images is divided into small parts; second, high-resolution volumes are tied to a single low-resolution image, keeping anatomical consistency intact. High-spatial-resolution images are produced by direct Fourier encoding from three short-duration scans [61]. Table 7 presents the summary of GAN-synthesized images used for super-resolution.

3.1.4. Contrast Enhancement

In MR imaging, different sequences (or modalities) can be acquired that provide valuable and distinct knowledge about brain disease, for example, T1-weighted, T2-weighted, proton density imaging, diffusion-weighted imaging, diffusion tensor imaging, and functional MRI (fMRI) [62,63,64]. The imaging process can highlight only one of them at a time. Multiple scan acquisitions and long scan times for capturing all contrasts increase the cost and the discomfort of the patient. An enhancement process that generates different contrasts from the same MRI sequences is helpful for overcoming data heterogeneity [65]. The contrast enhancement methods can be divided into three categories, as shown in Figure 5.
A. Modality Translation:
In MRI acquisition, discrete imaging protocols result in different intensity distributions for a single imaging object. Recent data-driven techniques acquire MR images from multiple centers and multiple devices with varying parameters, which gives rise to the need for universal, uniform datasets. All studies discussed in this section generate one or more MRI modalities from one or more available modalities. The redundant information of the multi-echo saturation recovery sequence with different echo times (TE) and inversion times (TI) generates multiple contrasts, generally used as a reference to find a mutual-correction effect. In [66], a multi-task deep learning model (MTDL) synthesizes six 2D multi-contrast sequences simultaneously: axial T1-weighted, T2-weighted, T1- and T2-FLAIR, short tau inversion recovery (STIR), and proton density (PD). The registration-based synthesis approach relies on a single atlas, which is responsible for the loss of structural information in synthetic multi-contrast images due to a nonlinear intensity transformation. The intensity-based method, by contrast, does not depend on fixed geometric relationships among different anatomies and gives better synthesis results. PGAN is used for generation when multi-contrast images are spatially registered, and CGAN when they are unregistered [67]. MultiModal GAN (MM-GAN), a variant of the Pix2Pix architecture, synthesizes the absent modality by merging the details from all available modalities [68]. MI-GAN, an amendment of MM-GAN, is a multi-input generative model that creates the missing modalities. Commonly acquired modalities are T1-weighted (T1), T1-contrast-enhanced (T1c), T2-weighted (T2), and T2 fluid-attenuated inversion recovery (FLAIR); the absent one is created from the other three available modalities [69]. The limitation of earlier cross-modality generation methods is that they do not extend to multiple modalities: a total of M(M − 1) different generators would need to be trained to learn all mappings among M modalities, and each translator can only use two of the M modalities at a time. The modality-agnostic encoder of a cycle-constrained CGAN extracts modality-invariant anatomical features and generates the desired modality with a conditioned decoder. A conditional autoencoder and discriminator can complete all pair-wise translations. Once the feedforward processing on any modality label is over, the same autoencoder is reused, conditioned on the modality label of the original input, for the cycle reconstruction [70]. The usual cross-modality image translation methods involving GAN models are based on paired data. Modular cycleGAN (MCGAN) performs unsupervised multimodal MRI translation from a single modality and retains the lesion information. The architecture includes encoders, decoders, and discriminators. MCGAN combines deconvolution and resize upsampling, which avoids checkerboard artifacts in the generated images [71]. Edges in a medical image contain principal anatomical details such as tissue, organ, and lesion information; however, the images produced by a normal GAN have blurred boundaries. A flexible, gradient-prior-integrated, encoder-decoder-based adversarial learning network (FGEAN) is an end-to-end framework with multiple inputs and multiple outputs that uses a gradient prior to retain high-frequency details such as tissue composition [72].
Edge-aware GAN (Ea-GAN) is a 3D method that extracts voxel-wise intensity and image structure information to overcome slice discontinuity and blurriness problems. The Sobel operator is used to extract the edge details. The Sobel filter assigns higher weights to its nearer neighbors and lower weights to the farther neighbors, which is impossible with direct image gradient application [73].
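A minimal 2D sketch of the Sobel edge extraction used as the edge prior follows (Ea-GAN itself operates in 3D; this simplification assumes single-channel slices):

    import torch
    import torch.nn.functional as F

    def sobel_edges(img):
        # img: (N, 1, H, W) grayscale slice
        kx = torch.tensor([[-1., 0., 1.],
                           [-2., 0., 2.],
                           [-1., 0., 1.]]).view(1, 1, 3, 3)
        ky = kx.transpose(2, 3)
        gx = F.conv2d(img, kx, padding=1)  # gradient along x
        gy = F.conv2d(img, ky, padding=1)  # gradient along y
        return torch.sqrt(gx ** 2 + gy ** 2 + 1e-8)  # edge magnitude map

An L1 difference between sobel_edges(real) and sobel_edges(fake) is the kind of edge-aware term such methods add to the adversarial loss.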
A cycleGAN-based unified forward generative adversarial network transforms T2-FLAIR images from different groups into a single reference one [74]. In [75], WGAN generates multi-sequence brain MR images with the advantage of stable learning; the Earth Mover (EM) distance (a.k.a. the Wasserstein-1 metric) of WGAN mitigates mode collapse. In [76], a sample-adaptive GAN imitates each sample by learning its correlation with neighboring training samples and applying the target-modality features as auxiliary information for synthesis. The self-attention GAN (SAGAN) of [77] attends to various organ anatomical structures via attention maps, which showcase spatial semantic details with the help of an attention module. In [78], the GAN framework learns shared content encoding and domain-specific style encoding across multiple domains. The CGAN in the image modality translation (IMT) network employs nonlinear atlas-based registration to register a moving image to the fixed image. The PatchGAN classifier, with no constraints on patch size, acts as the discriminator, generating accurate results with fewer parameters and a low running time [15]. In [79], GAN provides a solution for the detection of small vessel disease (SVD) by estimating the advancement of white matter hyperintensities (WMH) over a year. The disease evolution predictor (DEP) model detects WMH in T2-weighted and T2-FLAIR MRIs. DEP-GAN (disease evolution predictor GAN), an extension of visual attribution GAN (VA-GAN), uses an irregularity map (IM) or probability map (PM) for both input and output modalities to represent WMH. The generated image, called a disease evolution map (DEM), classifies brain tissue voxels into progressing, regressing, or stable WMH groups.
B. Quality Improvement:
High-resolution images are generated from down-sampled data during MRI analysis to save scan time. High-resolution images in one contrast improve the quality of down-sampled images in another contrast, and the anatomical details of different contrast images refine the reconstruction quality of the image. This increase in image contrast is used for the classification of brain tumors [80]. The intensity distributions of pixels in brain MR images overlap in regions of interest (ROIs), which causes low tissue contrast and creates problems for accurate tissue segmentation. A cycleGAN-based model increases the contrast within tissue using an attention mechanism: a multistage architecture focuses on a single tissue per stage and filters out the irrelevant context at every stage to increase the resolution of high tissue contrast (HTC) images [81]. CycleGAN for unpaired data usually encodes the deformations and noises of various domains during synthetic image generation; the deformation-invariant cycleGAN (DiCycleGAN) uses an image alignment loss based on normalized mutual information (NMI) to strengthen the alignment between source- and target-domain data [82]. Generating high-resolution MRI of the hippocampus region from low-resolution MRI is arduous. The difficulty-aware GAN (da-GAN) is designed with dual discriminators and attention mechanisms in hippocampus regions for creating multimodality images; these HR images are deployed to improve hippocampal subfield classification accuracy compared to LR images [83]. In [84], Sequential GAN, a combination of two GANs, generates bi-modality images from common low-dimensional vectors. Sequential multimodal image production first creates images of one modality from low-dimensional vectors; these synthetic images are then mapped to their counterparts in the other modality through image-to-image translation. The synthetic FLAIR images are not as realistic in quality as synthetic T1-weighted and T2-weighted images. In [85], a CGAN and two parallel FCNs improve the quality of synthetic FLAIR images by retaining the contrast information of the original FLAIR images. In [86], the proposed method discovers and learns global contrast from the label images and embeds this information in the generated images. A 2-way GAN coupled with global features in a U-Net bypasses the need for paired ground truth. The multimodal images with better perceptual quality improve the learning capability of the model.
C. Single Network Generation:
Unified GAN, an improved version of starGAN [87], generates multiple contrasts of MR images from a single modality. StarGAN can perform image translation among multiple domains with one generator and one discriminator. The single-input multiple-output (SIMO) model is trained on four different modalities; the network learns the details from the multimodal MR images and the corresponding modality labels. The generator takes an image of one modality and produces a target-modality image, then performs the second task of recreating the original modality image from the synthesized one [88]. The available methods of multimodal image generation target only missing-image production between two modalities: cycleGAN and pix2pixGAN can only create images from one modality to another, the former for unpaired images and the latter for paired images. Multimodality GAN (MGAN) simultaneously synthesizes three high-quality MR modalities (FLAIR, T1, and T1ce) from one MR modality, T2. Complementary information provided by these modalities boosts tumor segmentation accuracy. The architecture extends starGAN to paired multimodality MR images, adding modality labels to pix2pix. StarGAN brings domain labels to cycleGAN and thus empowers a single network to translate an input image to any desired target domain for unpaired multidomain training images. Thus, a single network translates the single modality T2 into any desired target modality [89].

3.1.5. Image Denoising

The visual quality of MRI is vital for downstream operations on acquired scans, and noise present in the scans can alter the diagnosis result. Denoising is mostly a preprocessing step for image analysis tasks such as segmentation and registration [90]. Image denoising and synthesis are utilized to study the complete manifold learning of brain MRI. A higher SNR value improves segmentation and registration tasks, and a low-dimensional manifold is preferred for statistical comparisons and the generation of group representatives. T1-weighted brain MR images are generated by learning from 2D axial slices of brain MRI, with skip-connected auto-encoders used for image denoising to traverse the manifold description of regular brains [91]. Rician noise in MRI arises from taking the magnitude of complex image data. Structure-preserved denoising of 3D MRI images allows exploring the similarity between neighboring slices. The Rician noise in MR images is removed by the residual encoder-decoder Wasserstein generative adversarial network (RED-GAN), where a 3D CNN operates on 3D volume data. The generator is an auto-encoder containing equal numbers of convolutional and de-convolutional layers supported by a residual block [92]. Residual encoder-decoder up-sampling non-similar WGAN (REDUPNSWGAN) uses a filter-based method to remove Rician noise while preserving the structural association between neighboring slices in 3D MRI. WGAN measures the Wasserstein distance to differentiate between ground-truth and synthetic images using a residual encoder-decoder. GNET removes Rician noise with the help of the Huber loss, DNET calculates the loss of discriminated samples, and the feature extractor in AVGNET calculates the perceptual loss [93]. A hybrid denoising GAN removes noise from highly accelerated wave-controlled aliasing in parallel imaging (Wave-CAIPI) images with the help of a 3D generator and a 2D discriminator [94].
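For context, Rician noise is straightforward to simulate when constructing training pairs for such denoisers: Gaussian noise is added to the real and imaginary channels before taking the magnitude. A minimal NumPy sketch (illustrative; sigma is an assumed noise level):

    import numpy as np

    def add_rician_noise(img, sigma=0.05):
        # corrupt real and imaginary channels with Gaussian noise,
        # then take the magnitude: the result follows a Rician distribution
        noise_re = np.random.normal(0.0, sigma, img.shape)
        noise_im = np.random.normal(0.0, sigma, img.shape)
        return np.sqrt((img + noise_re) ** 2 + noise_im ** 2)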

3.1.6. Segmentation

Brain tissue segmentation in an MRI scan provides vital biomarkers, such as quantification of tissue atrophy, structural changes, and localization of abnormalities, that are crucial in disease diagnosis. DL-based methods are finding success in automatic segmentation. The segmentation methods that use GAN-synthesized MR images for atrophy detection can be grouped into three categories, as shown in Figure 6 [95].
A. Brain Tumor Segmentation:
Two standard techniques in brain tumor segmentation are patch-based and end-to-end methods. A multi-angle GAN-based framework fuses the synthetic images with the probability maps. The PatchGAN generator focuses on local image patches, randomly selects many fixed-size patches from an image, and normalizes all responses, improving the resultant image; the multichannel structure in the discriminator averages the responses to provide the output [96]. A 3D GAN performs brain tumor segmentation by combining label correction and sample reweighting, where a dual inference network works as a revised label-mask generator [97]. Current glioma growth prediction relies on mathematical models built on complicated formulations of partial differential equations with few parameters, which capture insufficient patterns and other characteristics of gliomas. GANs, by contrast, have the upper hand over such mathematical models, as they do not need an explicit probability density function to generate data; moreover, GANs can withstand overfitting through structured training. A 3D GAN stacks two GANs with conditional initialization of segmented feature maps for glioma growth prediction [98]. Tumor growth prediction needs multiple time points of the same patient’s single or multimodal medical images; again, a stacked 3D GAN, GP-GAN, is used for glioma growth prediction [99]. Deep convolutional GAN (DCGAN) first performs data augmentation by generating synthetic images to create a large dataset. The image noise is also removed with the help of an adaptive median filter so that the resultant images have superior features. After this preprocessing step, a Faster R-CNN uses the synthetic data for training, identifying, and locating tumors, classifying each tumor into one of three types: meningioma, glioma, or pituitary tumor [100]. Manual delineation of lesions such as gliomas, ischemic lesions, and multiple sclerosis lesions from MR sequences is tedious. Discriminative machine learning techniques such as random forests and support vector machines, and DL techniques such as CNNs and autoencoders, detect and segment lesions from MR scans. However, generative methods such as GANs can also employ convolution operators to learn the distribution parameters [101].
The class-conditional densities of lesions overlap because the pixel values of ROIs are distributed over the entire intensity range in MR scans. The existence of four major overlapping ROIs (non-enhancing tumor, enhancing tumor, normal tissue, and edema) in the intensity distribution poses a challenge for the segmentation process. The enhancement and segmentation GAN (Enh-Seg-GAN) refines lesion contrast by including the classifier loss in model training, which estimates the central pixel labels of the sliding input patches. The CGAN generator modifies each pixel in the input image patch and then forwards it to the Markovian discriminator. The synthetic image is concatenated with other fundamental modalities (FLAIR, T1c, and T2) to improve segmentation [102]. The feature concatenation-based squeeze and excitation GAN (FCSE-GAN) appends a feature concatenation block to the generator network to reduce noise from the image and a squeeze-and-excitation block to the discriminator network to segment the brain tumor [103].
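Although the cited studies each define their own objectives, segmentation-oriented GANs in this space are commonly trained with an overlap-based term alongside the adversarial loss. A minimal soft Dice loss, given purely as a generic sketch (not the loss of any particular study above), is:

    import torch

    def soft_dice_loss(pred, target, eps=1e-6):
        # pred: predicted probabilities (N, C, H, W); target: one-hot labels
        inter = (pred * target).sum(dim=(2, 3))
        union = pred.sum(dim=(2, 3)) + target.sum(dim=(2, 3))
        dice = (2 * inter + eps) / (union + eps)  # per-class overlap score
        return 1 - dice.mean()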
B. Annotation:
The second group covers methods that perform segmentation without manual data labeling, annotation being a necessity for supervised DL models. The annotation of medical images is a tedious task requiring medical expertise, yet annotated datasets are an essential requirement for supervised machine learning. The supervised transfer learning (STL) method for domain adaptation trains the GAN model on a source-domain dataset and then fine-tunes it on a target-domain dataset. The inductive transfer learning (ITL) method extracts annotation labels for the target-domain dataset from the trained source-domain model using cycleGAN-based unsupervised domain adaptation (UDA) [104]. DCNN-based image segmentation methods are hard to generalize. A synthetic segmentation network (SynSeg-Net) trains a DCNN on unpaired source- and target-modality images without manual labels on the target imaging modality. In [105], cycleGAN performs multi-atlas segmentation with a cycle synthesis subnet and a segmentation subnet. GANs are also designed to generate properly anonymized synthetic images to safeguard patients’ private information. In [106], three GANs are trained on time-of-flight (TOF) magnetic resonance angiography (MRA) patches to create image labels for arterial brain vessel segmentation. Image labels created with deep convolutional GAN, Wasserstein GAN with gradient penalty (WGAN-GP), and WGAN-GP with spectral normalization (WGAN-GP-SN) are applied to a second dataset using the transfer learning approach; the results of WGAN-GP and WGAN-GP-SN are superior to those of DCGAN. The structure of triple-GAN, which works on the principle of a three-player cooperative game, is modified to incorporate 3D transposed convolutions in the generator. It performs tensor-train decomposition on all the classifier and discriminator layers and uses a high-order pooling module to exploit the association within feature maps. This tensor-train, high-order pooling, and semi-supervised learning-based GAN (THS-GAN) classifies MR images for AD diagnosis [107]. Under normal conditions, human brains are relatively symmetric; the presence of a mass lesion generates asymmetry in the brain structure because it displaces normal brain tissue. The symmetric-driven GAN (SD-GAN) learns a nonlinear mapping between the left and right brain images via unsupervised manifold learning to detect tumors from scans without requiring symmetry [108]. Segmentation tasks on medical images suffer from issues of generalization, overfitting, and insufficient annotated datasets. Guided GAN (GGAN) decimates the data points of an input image, reducing the size of the network so that it operates on only a few parameters [109].
C. Multimodal Segmentation:
The shape- or appearance-model-based Shape Constraint GAN (SC-GAN) uses a fully convolutional residual network (FC-ResNet) fused with a shape representation model (SRM) for segmentation tasks on multimodal images in H&N cancer diagnosis; a pre-trained 3D convolutional auto-encoder serves as the SRM regularizer in the training stage [110]. Multimodal segmentation should produce acceptable results in both source and target domains; however, the domain shifts between multiple modalities make learning divergent image features through a single model challenging. A three-dimensional unified GAN executes an auxiliary translation task by extracting modality-invariant features and upgrading low-level information representations [111]. Hippocampal subfield segmentation combines SVM with a 3D CNN and a GAN: a 3D GAN-SVM acts as the generator and a 3D CNN-SVM as the discriminator [112]. One2One CycleGAN is used in survival estimation, extracting features from multimodal MRI images; a single ResNet-based generator creates the T1 image from T2 samples and the T2 image from T1 samples, reducing overfitting and providing augmentation through virtual samples [113]. MRIs are used to locate disease lesions or to understand fMRI-based effective connectivity (EC) within a set of brain regions. Locating the lesions caused by multiple sclerosis (MS) in brain images is a real challenge, as there is much variability in the intensity, size, shape, and location of these lesions. In [114], GAN uses a single generator with multiple modalities and multiple discriminators, one per modality, to identify each NxN patch as real or fake.

3.1.7. Reconstruction

Although MRI is one of the most sought-after imaging methods for physical and physiological studies, its scanning time causes concern for patients [115]. MRIs are reconstructed for the various reasons cited in Figure 7.
A. MRI Acceleration:
The lengthy scanning process, in which the samples are collected line-by-line in k-space (the frequency-domain Fourier image space), is uncomfortable for patients and becomes a source of motion artifacts. The concept of accelerated MRI is crucial to tackling this issue. MRI is reconstructed from highly under-sampled (as little as 20% of) k-space data, especially in fetal, cardiac, and functional MRI, multimodal acquisitions, and dynamic contrast enhancement. The acquisition time is lowered by selecting fewer slices, reducing the spatial resolution. The sweep time can also be lessened by selecting a partial k-space and approximating the absent k-space points. A k-space U-Net and an image-space U-Net reconstruct the whole k-space matrix from under-sampled data [116]. The compressed sensing (CS) MRI scheme reduces the sweep time by considering a small set of samples for image construction. RefineGAN, which adapts a fully-residual convolutional auto-encoder to a general GAN, is the basis for fast and precise CS-MRI reconstruction; a chained network enhances the reconstruction quality [117]. Traditional CS-MRI is affected by slow iterations and noise-induced artifacts at high acceleration factors. The RSCA-GAN uses spatial and channel-wise attention with long skip connections to improve the quality at each stage, accelerating the reconstruction process and removing the artifacts brought by fast-paced under-sampling [118]. Parallel imaging integrated with the GAN model (PI-GAN) and transfer learning accelerates MRI with under-sampling in k-space; the transfer learning removes the artifacts and yields smoother brain edges [119]. Reconstructing multi-contrast brain MR images from down-sampled data points can save scanning time [120].
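The retrospective under-sampling that such reconstruction networks train on can be simulated directly. A minimal NumPy sketch that keeps a random subset of phase-encode lines plus the low-frequency center (illustrative; the acceleration factor and the size of the retained center are assumptions):

    import numpy as np

    def undersample_kspace(img, accel=4, center_lines=16):
        # transform the image to k-space
        k = np.fft.fftshift(np.fft.fft2(img))
        # keep roughly 1/accel of the phase-encode (row) lines at random
        mask = np.random.rand(img.shape[0]) < 1.0 / accel
        c = img.shape[0] // 2
        mask[c - center_lines // 2 : c + center_lines // 2] = True  # keep center
        k_under = k * mask[:, None]
        # zero-filled reconstruction: the aliased input a GAN learns to refine
        return np.abs(np.fft.ifft2(np.fft.ifftshift(k_under)))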
B. MR Slice Reconstruction:
MR slice reconstruction is performed to examine brain anatomy and plan surgical maneuvers, as the modality provides high resolution. Thin-section images are 1 mm thick with a spacing gap of zero, while thick-section images are 4 mm to 6 mm thick with a spacing gap of 0.4 mm to 1 mm; greater thickness leads to lower resolution. A GAN and a CNN are combined to reconstruct thin-section brain scans of newborns from thick-section ones. The first stage of the network is a least-squares GAN (LS-GAN) with a 3D-Y-Net generator; this stage fuses the images of the axial and sagittal planes and maps them onto the thin-section image space. A cascade of 3D-DenseU-Net and a stack of enhanced residual structures removes image artifacts and provides recalibration and structural improvements in the sagittal plane [121]. Unsupervised medical anomaly detection (MAD)-GAN uses multiple adjacent brain MRI slice reconstructions to locate brain anomalies at different stages of multi-sequence structural MRI [122]. The edge generator of edge-guided GAN (EG-GAN) takes as input the incomplete edges of low-resolution images and masks produced from missing slices in the through-plane and joins them; a contrast completion network employs these connected edges to predict the voxel intensities in the missing rows [123]. Conditional deep convolutional generative adversarial networks (CDCGANs) are used to forecast the advancement of AD by producing synthetic MR images in a series arrangement. The atrophy is measured using the cortical ribbon (CR) fractal dimension (box-counting method); the method uses only one coronal slice of a patient’s baseline T1 image, and a decreasing fractal dimension indicates progressing illness [124]. The brain multiplex image represents the brain connectivity status extracted from MRI scans, where the association between two brain regions of interest is quantified based on function, structure, and morphology. A single network, the adversarial brain multiplex translator (ABMT), performs brain multiplex estimation and classification with a view to discovering gender-related differences in linkages. Brain multiplexes are constructed from a source network intra-layer, a target intra-layer, and a convolutional interlayer. The ABMT is an improved version of GT-GAN [125], which pioneered graph or network translation; contrary to conventional GANs, the generator (translator) of GT-GAN learns the generic translation mapping from the source network to the target network [126]. A 3D CGAN and a locally adaptive fusion method are used for quality FLAIR image synthesis, synthesizing each slice separately along the axial direction and concatenating the slices into a 3D image. This synthesis predicts the coronal and sagittal directions by analyzing complete images or large image patches [127].
C. Enhancement of Scan Efficiency:
CGAN enhances the scan efficiency of under-sampled, multi-contrast acquired images. The shared high-frequency prior present in the source contrast is used to maintain high-spatial-frequency features, the low-frequency prior in the under-sampled target contrast is used to avert feature leakage or quality loss, and the perceptual prior is used to upgrade the retrieval of high-level attributes. The reconstructing-synthesizing GAN (RS-GAN) generator estimates the target-contrast image from either a fully sampled or a partially under-sampled source-contrast image [128]. The tissue susceptibility in various brain diseases is measured via the quantitative susceptibility mapping (QSM) technique, where the inherent issue of dipole inversion can affect the reliability of the susceptibility map. QSM-GAN is a 3D U-Net that solves the dipole inversion problem in QSM reconstruction [129]. A directed graph represents a brain effective-connectivity network, where nodes denote brain regions. EC-RGAN is a recurrent GAN that applies effective-connectivity generators to acquire the temporal information from the fMRI time series and refine the quality [130]. Double inversion recovery (DIR) images offer higher sensitivity for lesion diagnosis than conventional or fluid-attenuated T2-weighted scans and are beneficial for detecting cortical plaques in MS. DiamondGAN can increase image detail via multi-to-one mapping, where various input modalities (in this case, T1, T2, and FLAIR) are utilized to produce one output modality (in this case, DIR) [131]. A three-dimensional multi-information GAN uses structural MRI to find cortical atrophy and predict disease progression: first, a 3D GAN model generates 3D MRI images at future time points; then, a 3D-DenseNet-based multiclass classifier identifies the stages of the produced MRI [132]. Visual scenes can be reconstructed from human brain activity measured with fMRI. The dual-variational autoencoder/generative adversarial network (DVAE/GAN) learns the mapping from fMRI signals to their corresponding visual stimuli (images); a cognitive encoder, a visual encoder, and a GAN transform the high-dimensional, noisy brain signals (fMRI) into a low-dimensional latent representation [133].
D. Bias-free MRI Scan:
MRI scanners inherently produce a bias field, resulting in smooth intensity variations across the scans. Two GANs are trained simultaneously to reconstruct the smooth bias field and a bias-free MRI scan [134]. Ultrahigh-field MRI introduces strong signal inhomogeneity in the scanned images, giving rise to non-uniform power deposition in the tissues: the regional specific absorption rate (SAR) varies spatially and temporally, with possible hotspots in hard-to-predict positions. A CGAN model can assess the subject-specific local SAR, which is otherwise hard to compute and is typically estimated by offline numerical simulations using generic body models. A CNN learns to portray the connection between subject-specific complex B1+ maps and the corresponding local SAR [135].

3.1.8. Motion Correction

MRI acquisition is a time-consuming process, and keeping the head still in the scanner for the whole duration is challenging for patients. Subject motion during MRI scanning can introduce blurring and artifacts in the resultant images that severely deteriorate image quality, so motion correction is a crucial step during the preprocessing phase of the diagnosis [136,137]. A CGAN generates artifact-free images from images distorted by motion; combining a deep CNN (DCNN) generator with a classifier discriminator sharpens the output of the DCNN [138]. Different acquisition protocols for motion-free and motion-corrupted MR data introduce alignment deformity. Cycle-MedGAN is free from dependency on co-registered datasets, since unpaired images are used for training without prior co-registration or alignment. It employs the cycle-style and cycle-perceptual loss functions to supervise the generator network, and a self-attention architecture with convolutional layers and several residual blocks to improve long-range spatial dependency for the motion correction task [139]. Supervised adversarial correction of motion artifacts suffers from the alignment imperfections that result from the two distinct acquisition processes, one motion-free and the other motion-corrupted; the unsupervised mode, in contrast, is unaffected by the absence of a co-registered dataset. A framework based on cycle-MedGAN [140,141,142] does not require paired datasets during training and uses only a smaller subset of paired data for validation, with its self-attention characteristic and a new loss function.
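The cycle-perceptual loss mentioned above compares images in a deep feature space rather than in pixel space. Below is a minimal PyTorch sketch of a perceptual loss built on a frozen, pretrained VGG-16, a common choice assumed here for illustration (torchvision ≥ 0.13 API; ImageNet normalization omitted for brevity):

```python
import torch
import torch.nn as nn
from torchvision.models import vgg16, VGG16_Weights

class PerceptualLoss(nn.Module):
    """Compare images in the feature space of a frozen, pretrained VGG-16."""
    def __init__(self, layer_index: int = 16):
        super().__init__()
        features = vgg16(weights=VGG16_Weights.DEFAULT).features[:layer_index]
        for p in features.parameters():
            p.requires_grad_(False)  # the feature extractor stays fixed
        self.features = features.eval()
        self.criterion = nn.L1Loss()

    def forward(self, generated: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        # Grayscale MR slices are repeated to 3 channels to match VGG input.
        g = generated.repeat(1, 3, 1, 1) if generated.shape[1] == 1 else generated
        t = target.repeat(1, 3, 1, 1) if target.shape[1] == 1 else target
        return self.criterion(self.features(g), self.features(t))

loss_fn = PerceptualLoss()
fake = torch.rand(1, 1, 224, 224)  # motion-corrected output (dummy tensor)
real = torch.rand(1, 1, 224, 224)  # motion-free reference (dummy tensor)
print(loss_fn(fake, real).item())
```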

3.1.9. Data Augmentation

The success of DL models depends on large data samples, which is precisely the constraint of brain MRI. Data augmentation enlarges the training dataset, and thereby improves synthetic image quality, without collecting new samples. Operations such as translating, rotating, flipping, stretching, and shearing the existing images can augment the set (a minimal sketch of these classic operations follows this paragraph); however, such methods lack diversity in the newly generated samples, and training can therefore converge to suboptimal results [143]. Generative modeling maintains features similar to the actual dataset while developing a dummy version of the existing images. Deep convolutional GAN generates dummy images using strided convolutions for down- and upsampling in place of pooling layers [144]. A timely examination of the brain’s status is crucial for detecting Parkinson’s disease (PD) early and slowing its progression. Automatic diagnosis methods use either single-view or multi-view scans to perform classification or prediction of PD. A WGAN operates on multi-view samples from an MRI dataset containing the cross-sectional view (AXI) and the longitudinal view (SAG). The prodromal class, with fewer AXI/SAG MRI samples, causes over-fitting or under-fitting in an application; two ResNet networks are therefore trained jointly on the two-view data to create more samples for the prodromal class in AXI and SAG [145].
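A minimal torchvision sketch of the classic geometric operations listed above; the parameter ranges are illustrative, not values prescribed by the reviewed studies:

```python
import torch
from torchvision import transforms

# Illustrative classic augmentation pipeline (tensor transforms,
# torchvision >= 0.9 assumed).
augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),          # flipping
    transforms.RandomRotation(degrees=10),           # rotation within +/-10 degrees
    transforms.RandomAffine(degrees=0,
                            translate=(0.05, 0.05),  # small translation
                            scale=(0.9, 1.1),        # stretching/shrinking
                            shear=5),                # shearing up to 5 degrees
])

slice_tensor = torch.rand(1, 256, 256)  # one grayscale MR slice (dummy data)
augmented = augment(slice_tensor)       # a new, label-preserving variant
```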
Class imbalance is a significant issue in abnormal tissue identification and classification in medical analysis. In imbalanced data, the predominant class is filled with the usual healthy samples, while a subsidiary class holds the ailing samples. When a model is trained on a dataset with such visible disparity, it generates results biased towards the healthy data, giving rise to predictable network outputs and low sensitivities. The class distribution can be balanced by re-sampling the data space, namely oversampling the subsidiary class and under-sampling the predominant class (a weighted-sampling sketch follows this paragraph); by constructing a new compact dataset in an iterative sampling manner that bypasses unessential details; by ensemble sampling; or by hybrid sampling. A pair-wise GAN architecture uses cross-modality input to increase heterogeneity in the augmented images. GAN-augmented images are utilized in the pre-training phase, and real brain MRIs then complete the advanced training, leading to synthetic MR images from one modality to another [146]. Brain tumors are segregated into meningioma, glioma, and pituitary tumors; the close resemblance of the three classes makes their classification in MRI images complex. A multi-scale gradient GAN (MSG-GAN) synthesizes MRI images with meningioma disease and uses transfer learning to improve classification performance [147]. Noise-to-image and image-to-image GANs enhance the data augmentation (DA) effect. Progressive growing of GAN (PGGAN) is a multistage noise-to-image GAN used for high-resolution MR image generation; refinement methods such as multimodal unsupervised image-to-image translation (MUNIT) or SimGAN rectify the texture and shape of the images produced by PGGAN to bring them close to the originals [148]. A moderate-sized glioma dataset can hamper precise brain tumor categorization across several MRI modalities such as T1-weighted, T1-weighted contrast-enhanced, T2-weighted, and FLAIR; a pair-wise GAN, trained on two input channels unlike the normal GAN's single input channel, augments the compact dataset with brain images [149]. Two types of perfusion modalities, dynamic susceptibility contrast (DSC) and dynamic contrast-enhanced (DCE), are used to generate realistic relative cerebral blood volume (RCBV) maps. The CGAN is trained on brain tumor perfusion images to learn DSC and DCE parameters with a single administration of a gadolinium-based contrast agent [150]. AGGrGAN in [151] is a collection of three base GAN models (two variants of deep convolutional GAN (DCGAN) and a WGAN) that generate synthetic MRI images of brain tumors. The model uses the style transfer technique, selects distributed features across multiple latent spaces, and captures local patterns to enhance image resemblance.
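As a concrete example of balancing the class distribution by re-sampling the data space, the following PyTorch sketch oversamples the subsidiary (diseased) class with inverse-frequency weights; the dataset sizes and labels are dummy values:

```python
import torch
from torch.utils.data import TensorDataset, DataLoader, WeightedRandomSampler

# Toy imbalanced labels: 90 healthy (0) versus 10 diseased (1).
labels = torch.cat([torch.zeros(90, dtype=torch.long),
                    torch.ones(10, dtype=torch.long)])
images = torch.rand(100, 1, 64, 64)  # dummy MR patches
dataset = TensorDataset(images, labels)

# Inverse-frequency weights: rare (diseased) samples are drawn more often.
class_counts = torch.bincount(labels).float()   # tensor([90., 10.])
sample_weights = (1.0 / class_counts)[labels]   # one weight per sample
sampler = WeightedRandomSampler(sample_weights,
                                num_samples=len(dataset),
                                replacement=True)

loader = DataLoader(dataset, batch_size=16, sampler=sampler)
# Each minibatch now approximates a balanced class distribution.
```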
In stroke, brain cells start dying due to insufficient blood supply to the brain (cerebral ischemia) or internal bleeding (intracranial hemorrhage). A CGAN generator is trained on specially altered lesion masks to create synthetic brain images that enlarge the training dataset; the CNN segmentation network includes depth-wise-convolution-based X-blocks and a feature similarity module (FSM) [152]. IsoData (Iterative Self-Organizing Data Analysis Technique) is an unsupervised classification method that initializes class means uniformly dispersed within the data space and iteratively clusters the remaining pixels by minimum-distance criteria; every iteration computes new means and reclassifies the pixels (see the sketch after this paragraph). The WGAN-based process operates on the image histogram, generalizing to more than two classes and splitting, merging, and deleting classes depending on the input threshold parameters [153]. Functional connectivity GAN (FC-GAN) generates the functional brain connectivity (FC) patterns obtained from fMRI data, amplifying the efficiency of the neural network classifier. The VAE- and WGAN-based network contains three parts: the encoder, the generator, and the discriminator [154].
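A minimal sketch of IsoData-style histogram thresholding using the implementation shipped with scikit-image; the two-population slice below is synthetic dummy data:

```python
import numpy as np
from skimage.filters import threshold_isodata

# Dummy slice: two intensity populations standing in for tissue classes.
rng = np.random.default_rng(0)
slice_img = np.concatenate([rng.normal(60, 10, 5000),
                            rng.normal(160, 15, 5000)]).reshape(100, 100)

# IsoData iterates until the threshold equals the mean of the two class means.
t = threshold_isodata(slice_img)
mask = slice_img > t
print(f"IsoData threshold: {t:.1f}, foreground fraction: {mask.mean():.2f}")
```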
Connectome-based sample generation is another approach to data augmentation. An adversarial auto-encoder (AAE) framework produces synthetic structural brain connectivity instances of MS patients even for an unbalanced dataset [155]. The number of samples in regular fMRI datasets is insufficient for training. Multiple GAN architectures, namely cycleGAN, starGAN, and RadialGAN, generate new multi-subject fMRI points and do not need label details to determine the relation matrix. CycleGAN is not expandable to multiple domains because N(N-1) mappings would have to be learned for N domains; starGAN is expandable to multiple domains, using a single generator for multi-domain translation tasks; and RadialGAN can successfully extend the target dataset by employing multiple source datasets [156]. A dual-encoder BiGAN architecture replicates abnormal samples within a normal distribution; for anomaly detection, BiGAN reduces the poor cycle consistency loss caused by insufficient sample information [157]. The approach in [158] generates annotated diffusion-weighted images (DWIs) of brains showing an ischemic stroke (IS): realistic DWIs are generated from axial slices of 3D segmentation maps with the help of three generative models, Pix2Pix, SPADE, and cycleGAN.

3.2. Loss Functions (RQ 2)

The structure of the losses is crucial in the supervised training of GAN models for generating quality images. The discriminator loss is a function of the images produced by the generator and takes a high value when the discriminator cannot discriminate between source and dummy images; similarly, the generator loss is a function of the discriminator’s performance and takes a high value when the generator cannot produce images close to the authentic ones. Both networks improve when the model’s training alternates between them, after which suitable weights must be identified for the network to create more realistic images [144]. The loss functions used in the reviewed studies are listed in Table 8. Adversarial loss, cycle consistency loss, L1 loss, L2 loss, perceptual loss, and WGAN loss are the primary loss functions and predominate in the SLR. Figure 8 shows the distribution of these commonly used loss functions in the SLR.
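As a concrete instance of the adversarial and L1 terms listed above, a minimal PyTorch sketch follows; the λ = 100 weighting is an illustrative convention (used, e.g., by pix2pix [14]) rather than a value prescribed by the reviewed studies:

```python
import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()
l1 = nn.L1Loss()

def discriminator_loss(d_real_logits, d_fake_logits):
    """High when the discriminator cannot separate real from synthesized images."""
    real_loss = bce(d_real_logits, torch.ones_like(d_real_logits))
    fake_loss = bce(d_fake_logits, torch.zeros_like(d_fake_logits))
    return real_loss + fake_loss

def generator_loss(d_fake_logits, fake_img, real_img, lambda_l1=100.0):
    """Adversarial term (fool the discriminator) plus a weighted L1 fidelity term."""
    adv = bce(d_fake_logits, torch.ones_like(d_fake_logits))
    return adv + lambda_l1 * l1(fake_img, real_img)
```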

3.3. Preprocessing of Ground Truth Brain MRI (RQ 3)

The training dataset for a GAN contains brain MRIs that act as ground truth images for generating GAN-synthesized images. Preprocessing operations on these ground truth brain MRIs are crucial for the fidelity of the subsequent GAN performance: these tasks sharpen the image and remove noise to enhance qualitative and quantitative estimation [159]. Compared to machine learning techniques, preprocessing is not exhaustive in the case of DL; still, the input image needs to undergo some treatments before being fed to the neural networks. Some regular image preprocessing steps [18,150] are discussed below:
  • Intensity Normalization:
MRIs from multiple centers, acquired with scanners from distinct vendors and at different magnetic field strengths, show discrepancies in brightness and some induced noise [160]. To lessen these effects, intensity normalization is implemented by measuring the variability of the MR image’s intensity values [154]. The non-uniform voxel intensities of all volumes are standardized, and each volume is then normalized to zero mean and unit standard deviation (see the sketch after this list). Patient-wise normalization can control overfitting: each patient scan is normalized by dividing each sequence by its mean intensity value, which ensures that the distribution of intensity values is preserved [149].
  • Skull Stripping:
Skull stripping removes the skull from the images to focus on the intracranial tissues [95].
  • Registration:
All input images are registered to the same imaging space during the registration phase: registration is the spatial alignment of images to a common anatomical space [46]. Datasets differ in the registration status of their training images; for example, the MIDAS and IXI datasets carry unregistered images, while the BRATS dataset carries already registered images [161].
  • Bias Field Correction:
This operation corrects image contrast variations caused by inhomogeneity of the magnetic field [95].
  • Center Cropping:
In this operation, the outer parts of each brain image volume are removed, retaining the central region along each dimension (see the sketch after this list) [88].
  • Data Augmentation:
Data augmentation is achieved through operations such as translation, flipping, re-sizing, scaling, rotation between −10 and 10 degrees, and Gaussian noise application [87,98].
  • Motion Correction:
Keeping the patient’s head stationary in the scanner during MRI acquisition is challenging, especially for young children and elderly patients. This operation reduces the noise that arises in the scan due to subject motion.
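A minimal NumPy sketch of two of the steps above, intensity normalization and center cropping; the volume shape and crop size are illustrative assumptions:

```python
import numpy as np

def zscore_normalize(volume: np.ndarray) -> np.ndarray:
    """Standardize a volume to zero mean and unit standard deviation."""
    return (volume - volume.mean()) / (volume.std() + 1e-8)

def center_crop(volume: np.ndarray, target=(160, 192, 160)) -> np.ndarray:
    """Keep the central region along each dimension, discarding the outer parts."""
    starts = [(s - t) // 2 for s, t in zip(volume.shape, target)]
    slices = tuple(slice(st, st + t) for st, t in zip(starts, target))
    return volume[slices]

vol = np.random.rand(182, 218, 182).astype(np.float32)  # dummy T1 volume
vol = zscore_normalize(vol)
vol = center_crop(vol)
print(vol.shape, f"mean={float(vol.mean()):.3f}")
```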
The number and order of these steps may vary from case to case, depending on the application requirements. Figure 9 shows the percentage-wise use of these preprocessing operations in the SLR. Preprocessing software packages are available to perform the above-listed tasks (Table 9). Format conversion is applied to the scanned MRI images: the source images are in digital imaging and communications in medicine (DICOM) format, are converted to 3D images in neuroimaging informatics technology initiative (NIfTI) format, then sliced in the axial direction and saved in joint photographic experts group (JPG) or portable network graphics (PNG) format (Figure 10).
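The NIfTI-to-PNG step of this conversion pipeline can be sketched as follows with nibabel and Pillow; the input file name is hypothetical, and the preceding DICOM-to-NIfTI conversion is assumed to have been done with a converter such as dicom2nifti:

```python
import numpy as np
import nibabel as nib
from PIL import Image

# DICOM -> NIfTI conversion is assumed done beforehand; here we slice a
# NIfTI volume in the axial direction and export 8-bit PNGs.
volume = nib.load("subject01_T1.nii.gz").get_fdata()  # hypothetical file name

# Rescale intensities to the 8-bit range for PNG export.
v = (volume - volume.min()) / (volume.max() - volume.min() + 1e-8)
v = (v * 255).astype(np.uint8)

for k in range(v.shape[2]):                  # axial slices along the third axis
    Image.fromarray(v[:, :, k]).save(f"slice_{k:03d}.png")
```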

3.4. Comparative Study of Evaluation Metric (RQ 4)

The success of a GAN depends on the quality of the synthesized image, so an evaluation metric that can quantitatively characterize the synthesis accuracy is necessary to comment on synthetic image quality. There are two types of metrics: full-reference (FR) quality metrics, where the quality of a synthetic image is measured against a ground-truth image, and no-reference (NR) quality metrics, where the quality scores of the synthetic image are based on expected image statistics. Two commonly used evaluation metrics are discussed below. Table 10 presents a comparative study of the existing evaluation metrics. Some of the metrics in the table assess the quality of the synthesized image as a whole, while others assess the image patch-wise and are thus suitable for the segmentation task [69].
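As illustrative examples of full-reference metrics, the sketch below computes PSNR and SSIM with scikit-image on a dummy slice pair; these two are named here only as common examples, while the full set of reviewed metrics is given in Table 10:

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

real = np.random.rand(256, 256).astype(np.float32)  # ground-truth slice (dummy)
synth = np.clip(real + 0.05 * np.random.randn(256, 256), 0, 1).astype(np.float32)

# Both metrics compare the synthetic image against the ground truth.
psnr = peak_signal_noise_ratio(real, synth, data_range=1.0)
ssim = structural_similarity(real, synth, data_range=1.0)
print(f"PSNR: {psnr:.2f} dB, SSIM: {ssim:.3f}")
```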

4. Discussion

A substantial amount of data samples is the primary requirement for implementing deep learning algorithms. However, procuring adequate data is always a challenge in medical applications, and this fact can restrict the wide and acceptable use of DL methods. GAN models provide a reasonable solution to this problem. The architecture of traditional GAN consists of two neural networks, the generator and the discriminator, working in tandem to create a synthetic version of the source data. The generator updates its output based on the feedback of the error function from the discriminator. The architecture utilizes unsupervised learning to produce an image from the features of the real image. This SLR shows that while GAN-synthesized images have obvious potential for data augmentation, they have also been used immensely for other image applications; in addition, GAN can speed up the whole analysis step in radiology [11]. This SLR includes papers published in Q1 or Q2 journals and conferences in the Web of Science and Scopus databases. Future researchers can confirm the findings by exploring other databases as well; however, to the best of our knowledge, the main conclusions drawn in this study would not change significantly. Some significant findings of the review are listed here:

4.1. GAN Variants

The conventional GAN model suffers from mode collapse and from the problem that an optimal discriminator does not provide enough information for the generator, leading to poor image generation. Many modifications to the basic GAN structure have been suggested and implemented. The most popular models, such as CGAN, WGAN, cycleGAN, starGAN, and SRGAN, are more robust and are used in image synthesis, translation, and super-resolution applications.
In CGAN, a condition is imposed on the generator and the discriminator that acts as a controlling mode, giving better control over image synthesis [163,164]. WGAN [164] stabilizes the training of conventional GAN by solving the network convergence problem and accelerating training; its loss function is the distance between two probability distributions, known as the Wasserstein or Earth Mover’s (EM) distance [11]. CycleGAN is suitable for unpaired image translation and training stabilization. It has two generator networks mastering the two different mappings, source-to-target and target-to-source, and two discriminators that differentiate the synthesized image from the original image in each domain. Evaluation of the resultant image is based on cycle-consistency losses that measure the similarity between synthetic and original images [165]. StarGAN is suitable for cross-domain image translation using a single model: multiple generators are not required to produce images from different domains, and the mask vector method allows image translation between multiple datasets [87]. Figure 11 shows the major GAN variants and their extensions.
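Minimal PyTorch sketches of the WGAN critic objective (with the weight clipping used by the original WGAN to enforce the Lipschitz constraint) and the cycle-consistency loss; both are generic textbook forms, not the exact implementations of the reviewed works:

```python
import torch
import torch.nn as nn

def wgan_losses(critic_real, critic_fake):
    """WGAN: the critic maximizes the EM-distance estimate; the generator minimizes it."""
    critic_loss = critic_fake.mean() - critic_real.mean()
    generator_loss = -critic_fake.mean()
    return critic_loss, generator_loss

def clip_critic_weights(critic: nn.Module, c: float = 0.01):
    """Original WGAN enforces the Lipschitz constraint by clipping critic weights."""
    for p in critic.parameters():
        p.data.clamp_(-c, c)

def cycle_consistency_loss(real_a, rec_a, real_b, rec_b):
    """CycleGAN: G_BA(G_AB(a)) should reconstruct a, and vice versa."""
    l1 = nn.L1Loss()
    return l1(rec_a, real_a) + l1(rec_b, real_b)
```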

4.2. Multimodal Image Generation

MR images often have several complementary modalities regarding anatomical information, and in the last two years research has trended towards information extraction from multimodality images for better diagnosis. CycleGAN works on unpaired images, and Pix2pixGAN is suitable for paired image datasets; however, both architectures can only translate images from one modality to another and fail at multi-domain sample generation. Since such a network learns only a 1-to-1 mapping between two domains, N(N-1) structures would be required to finish the learning for N domains. StarGAN provides a better solution: a single network working on unpaired multi-domain datasets. It adds domain labels to cycleGAN and selects the desired domain through a mask vector during translation [98,114]. RadialGAN is another technique suitable for multiple source datasets; it works well for data augmentation when the data labels are continuous and has a common latent space for all domains [166]. To exploit the information available in various domains, CollaGAN proposes a collaborative model that includes the details of multiple domains to generate images from an absent domain. CollaGAN secures the features in synthetic images through cycle consistency, similarly to starGAN, and executes N-to-1 translation using a one-hot mask vector in the input that defines the target domain; each of the N domains incorporates a style and content code for a shared latent representation [96].
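To illustrate mask-vector-style conditioning, the sketch below tiles a one-hot target-domain label spatially and concatenates it with the input channels so that a single generator can serve multiple domains; this is a generic sketch, not StarGAN's exact implementation:

```python
import torch

def condition_on_domain(image: torch.Tensor, domain_idx: int,
                        num_domains: int) -> torch.Tensor:
    """Tile a one-hot domain label spatially and concatenate it to the input channels."""
    b, _, h, w = image.shape
    label = torch.zeros(b, num_domains, device=image.device)
    label[:, domain_idx] = 1.0                       # one-hot target-domain vector
    label_map = label.view(b, num_domains, 1, 1).expand(b, num_domains, h, w)
    return torch.cat([image, label_map], dim=1)      # (B, C + num_domains, H, W)

x = torch.randn(2, 1, 128, 128)  # source-modality slices (dummy tensors)
g_input = condition_on_domain(x, domain_idx=2, num_domains=4)
print(g_input.shape)             # torch.Size([2, 5, 128, 128])
```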

5. Conclusions

AI researchers have adopted GAN for various image applications in recent years, and medical imaging is one prominent area where GAN-synthesized images can be helpful in multiple ways. This SLR presents a comprehensive study of the applications of GAN-synthesized images for brain MRI. Over the years, the architecture of conventional GAN has been modified, and its variants, such as CGAN, WGAN, cycleGAN, and starGAN, show promising results in classifying and predicting brain diseases. GAN arguably handles the scarce-dataset problem of medical images in the best way possible, and GAN-synthesized images have moved beyond their obvious application in data augmentation: they are being explored for image translation, registration, super-resolution, contrast enhancement, denoising, segmentation, reconstruction, and motion correction. Though many tasks are achievable with GAN implementation, there are still challenges in adopting GAN in real-time clinical settings. More work needs to be performed regarding stability in GAN training, 3D GAN, and unsupervised learning modes.

Author Contributions

Conceptualization, S.T.; methodology, S.T.; data curation, S.T.; writing—original draft preparation, S.T.; writing—review and editing, S.T., M.B., S.G., K.K., V.V.; visualization, S.T.; supervision, M.B., S.G., K.K., V.V. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Currie, S.; Hoggard, N.; Craven, I.J.; Hadjivassiliou, M.; Wilkinson, I.D. Understanding MRI: Basic MR physics for physicians. Postgrad. Med. J. 2013, 89, 209–223. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  2. Latif, G.; Kazmi, S.B.; Jaffar, M.A.; Mirza, A.M. Classification and Segmentation of Brain Tumor Using Texture Analysis. In Proceedings of the 9th WSEAS International Conference on Artificial Intelligence, Knowledge Engineering and Data Bases, Stevens Point, WI, USA, 20–22 February 2010; pp. 147–155. [Google Scholar]
  3. Tiwari, A.; Srivastava, S.; Pant, M. Brain tumor segmentation and classification from magnetic resonance images: Review of selected methods from 2014 to 2019. Pattern Recognit. Lett. 2020, 131, 244–260. [Google Scholar] [CrossRef]
  4. Lecun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef] [PubMed]
5. Shen, D.; Liu, T.; Peters, T.M.; Staib, L.H.; Essert, C.; Zhou, S.; Yap, P.-T.; Khan, A. MICCAI 2019, Part 4; Springer: Berlin/Heidelberg, Germany, 2019; Volume 1, ISBN 9783030322519. [Google Scholar]
  6. Gudigar, A.; Raghavendra, U.; Hegde, A.; Kalyani, M.; Ciaccio, E.J.; Rajendra Acharya, U. Brain pathology identification using computer aided diagnostic tool: A systematic review. Comput. Methods Programs Biomed. 2020, 187, 105205. [Google Scholar] [CrossRef] [PubMed]
  7. Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative adversarial networks. Commun. ACM 2020, 63, 139–144. [Google Scholar] [CrossRef]
  8. Ali, H.; Biswas, R.; Ali, F.; Shah, U.; Alamgir, A.; Mousa, O.; Shah, Z. The role of generative adversarial networks in brain MRI: A scoping review. Insights Imaging 2022, 13, 98. [Google Scholar] [CrossRef]
  9. Creswell, A.; White, T.; Dumoulin, V.; Arulkumaran, K.; Sengupta, B.; Bharath, A.A. Generative Adversarial Networks: An Overview. IEEE Signal Process. Mag. 2018, 35, 53–65. [Google Scholar] [CrossRef] [Green Version]
  10. Rashid, M.; Singh, H.; Goyal, V. The use of machine learning and deep learning algorithms in functional magnetic resonance imaging—A systematic review. Expert Syst. 2020, 37, 1–29. [Google Scholar] [CrossRef]
  11. Vijina, P.; Jayasree, M. A Survey on Recent Approaches in Image Reconstruction. In Proceedings of the 2020 International Conference on Power, Instrumentation, Control and Computing (PICC), Thrissur, India, 17–19 December 2020. [Google Scholar] [CrossRef]
  12. Kitchenham, B.; Charters, S. Guidelines for Performing Systematic Literature Reviews in Software Engineering; Technical Report EBSE; Keele University: Keele, UK, 2007. [Google Scholar]
  13. Liu, W.; Hu, G.; Gu, M. The probability of publishing in first-quartile journals. Scientometrics 2016, 106, 1273–1276. [Google Scholar] [CrossRef]
  14. Isola, P.; Zhu, J.Y.; Zhou, T.; Efros, A.A. Image-to-image translation with conditional adversarial networks. Proc. IEEE Conf. Comput. Vis. Pattern Recognit. 2017, 2017, 5967–5976. [Google Scholar] [CrossRef] [Green Version]
  15. Yang, Q.; Li, N.; Zhao, Z.; Fan, X.; Chang, E.I.C.; Xu, Y. MRI Cross-Modality Image-to-Image Translation. Sci. Rep. 2020, 10, 3753. [Google Scholar] [CrossRef] [Green Version]
  16. Kazemifar, S.; McGuire, S.; Timmerman, R.; Wardak, Z.; Nguyen, D.; Park, Y.; Jiang, S.; Owrangi, A. MRI-only brain radiotherapy: Assessing the dosimetric accuracy of synthetic CT images generated using a deep learning approach. Radiother. Oncol. 2019, 136, 56–63. [Google Scholar] [CrossRef] [Green Version]
  17. Kazemifar, S.; Barragán Montero, A.M.; Souris, K.; Rivas, S.T.; Timmerman, R.; Park, Y.K.; Jiang, S.; Geets, X.; Sterpin, E.; Owrangi, A. Dosimetric evaluation of synthetic CT generated with GANs for MRI-only proton therapy treatment planning of brain tumors. J. Appl. Clin. Med. Phys. 2020, 21, 76–86. [Google Scholar] [CrossRef] [Green Version]
  18. Bourbonne, V.; Jaouen, V.; Hognon, C.; Boussion, N.; Lucia, F.; Pradier, O.; Bert, J.; Visvikis, D.; Schick, U. Dosimetric validation of a gan-based pseudo-ct generation for mri-only stereotactic brain radiotherapy. Cancers 2021, 13, 1082. [Google Scholar] [CrossRef]
  19. Tang, B.; Wu, F.; Fu, Y.; Wang, X.; Wang, P.; Orlandini, L.C.; Li, J.; Hou, Q. Dosimetric evaluation of synthetic CT image generated using a neural network for MR-only brain radiotherapy. J. Appl. Clin. Med. Phys. 2021, 22, 55–62. [Google Scholar] [CrossRef]
  20. Armanious, K.; Jiang, C.; Fischer, M.; Küstner, T.; Nikolaou, K.; Gatidis, S.; Yang, B. MedGAN: Medical image translation using GANs. Comput. Med. Imaging Graph. 2019, 79, 101684. [Google Scholar] [CrossRef]
  21. Tao, L.; Fisher, J.; Anaya, E.; Li, X.; Levin, C.S. Pseudo CT Image Synthesis and Bone Segmentation from MR Images Using Adversarial Networks with Residual Blocks for MR-Based Attenuation Correction of Brain PET Data. IEEE Trans. Radiat. Plasma Med. Sci. 2020, 5, 193–201. [Google Scholar] [CrossRef]
  22. Liu, X.; Emami, H.; Nejad-Davarani, S.P.; Morris, E.; Schultz, L.; Dong, M.; Glide-Hurst, C.K. Performance of deep learning synthetic CTs for MR-only brain radiation therapy. J. Appl. Clin. Med. Phys. 2021, 22, 308–317. [Google Scholar] [CrossRef]
  23. Emami, H.; Dong, M.; Glide-Hurst, C.K. Attention-Guided Generative Adversarial Network to Address Atypical Anatomy in Synthetic CT Generation. In Proceedings of the 2020 IEEE 21st International Conference on Information Reuse and Integration for Data Science (IRI), Las Vegas, NV, USA, 11–13 August 2020; pp. 188–193. [Google Scholar] [CrossRef]
  24. Abu-Srhan, A.; Almallahi, I.; Abushariah, M.A.M.; Mahafza, W.; Al-Kadi, O.S. Paired-unpaired Unsupervised Attention Guided GAN with transfer learning for bidirectional brain MR-CT synthesis. Comput. Biol. Med. 2021, 136, 104763. [Google Scholar] [CrossRef]
  25. Lei, Y.; Harms, J.; Wang, T.; Liu, Y.; Shu, H.K.; Jani, A.B.; Curran, W.J.; Mao, H.; Liu, T.; Yang, X. MRI-only based synthetic CT generation using dense cycle consistent generative adversarial networks. Med. Phys. 2019, 46, 3565–3581. [Google Scholar] [CrossRef]
  26. Uzunova, H.; Ehrhardt, J.; Handels, H. Memory-efficient GAN-based domain translation of high resolution 3D medical images. Comput. Med. Imaging Graph. 2020, 86, 101801. [Google Scholar] [CrossRef] [PubMed]
  27. Shafai-Erfani, G.; Lei, Y.; Liu, Y.; Wang, Y.; Wang, T.; Zhong, J.; Liu, T.; McDonald, M.; Curran, W.J.; Zhou, J.; et al. MRI-based proton treatment planning for base of skull tumors. Int. J. Part. Ther. 2019, 6, 12–25. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  28. Gong, K.; Yang, J.; Larson, P.E.Z.; Behr, S.C.; Hope, T.A.; Seo, Y.; Li, Q. MR-Based Attenuation Correction for Brain PET Using 3-D Cycle-Consistent Adversarial Network. IEEE Trans. Radiat. Plasma Med. Sci. 2021, 5, 185–192. [Google Scholar] [CrossRef] [PubMed]
  29. Matsui, T.; Taki, M.; Pham, T.Q.; Chikazoe, J.; Jimura, K. Counterfactual Explanation of Brain Activity Classifiers Using Image-To-Image Transfer by Generative Adversarial Network. Front. Neuroinform. 2022, 15, 1–15. [Google Scholar] [CrossRef] [PubMed]
  30. Mehmood, M.; Alshammari, N.; Alanazi, S.A.; Basharat, A.; Ahmad, F.; Sajjad, M.; Junaid, K. Improved colorization and classification of intracranial tumor expanse in MRI images via hybrid scheme of Pix2Pix-cGANs and NASNet-large. J. King Saud Univ.-Comput. Inf. Sci. 2022, 34, 4358–4374. [Google Scholar] [CrossRef]
  31. Nie, D.; Trullo, R.; Lian, J.; Wang, L. Medical Image Synthesis with Deep Convolutional Adversarial Networks. Physiol. Behav. 2016, 176, 100–106. [Google Scholar] [CrossRef]
  32. Kang, S.K.; Seo, S.; Shin, S.A.; Byun, M.S.; Lee, D.Y.; Kim, Y.K.; Lee, D.S.; Lee, J.S. Adaptive template generation for amyloid PET using a deep learning approach. Hum. Brain Mapp. 2018, 39, 3769–3778. [Google Scholar] [CrossRef]
  33. Wei, W.; Poirion, E.; Bodini, B.; Durrleman, S.; Ayache, N.; Stankoff, B.; Colliot, O. Predicting PET-derived demyelination from multimodal MRI using sketcher-refiner adversarial training for multiple sclerosis. Med. Image Anal. 2019, 58, 101546. [Google Scholar] [CrossRef]
  34. Gao, X.; Shi, F.; Shen, D.; Liu, M. Task-Induced Pyramid and Attention GAN for Multimodal Brain Image Imputation and Classification in Alzheimer’s Disease. IEEE J. Biomed. Health Inform. 2022, 26, 36–43. [Google Scholar] [CrossRef]
35. Hu, S.; Lei, B.; Wang, S. Bidirectional Mapping Generative Adversarial Networks for Brain MR to PET Synthesis. IEEE Trans. Med. Imaging 2022, 41, 145–157. [Google Scholar] [CrossRef]
  36. Pan, Y.; Liu, M.; Lian, C.; Xia, Y.; Shen, D. Spatially-Constrained Fisher Representation for Brain Disease Identification with Incomplete Multi-Modal Neuroimages. IEEE Trans. Med. Imaging 2020, 39, 2965–2975. [Google Scholar] [CrossRef]
  37. Zotova, D.; Jung, J.; Laertizien, C. GAN-Based Synthetic FDG PET Images from T1 Brain MRI Can Serve to Improve Performance of Deep Unsupervised Anomaly Detection Models. In Proceedings of the International Workshop on Simulation and Synthesis in Medical Imaging, Strasbourg, France, 27 September 2021; pp. 142–152. [Google Scholar] [CrossRef]
  38. Liu, H.; Nai, Y.H.; Saridin, F.; Tanaka, T.; O’ Doherty, J.; Hilal, S.; Gyanwali, B.; Chen, C.P.; Robins, E.G.; Reilhac, A. Improved amyloid burden quantification with nonspecific estimates using deep learning. Eur. J. Nucl. Med. Mol. Imaging 2021, 48, 1842–1853. [Google Scholar] [CrossRef]
39. Hill, D.L.G.; Batchelor, P.G.; Holden, M.; Hawkes, D.J. Medical image registration. Phys. Med. Biol. 2001, 46, R1–R45. [Google Scholar] [CrossRef]
  40. Salehi, S.S.; Khan, S.; Erdogmus, D.; Gholipour, A. Real-time Deep Pose Estimation with Geodesic Loss for Image-to-Template Rigid Registration. Physiol. Behav. 2019, 173, 665–676. [Google Scholar] [CrossRef]
  41. Zheng, Y.; Sui, X.; Jiang, Y.; Che, T.; Zhang, S.; Yang, J.; Li, H. SymReg-GAN: Symmetric Image Registration with Generative Adversarial Networks. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 44, 5631–5646. [Google Scholar] [CrossRef]
  42. Liu, X.; Zhao, H.; Zhang, S.; Tang, Z. Brain Image Parcellation Using Multi-Atlas Guided Adversarial Fully Convolutional Network. In Proceedings of the 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019), Venice, Italy, 8–11 April 2019; pp. 723–726. [Google Scholar]
  43. Tang, Z.; Liu, X.; Li, Y.; Yap, P.T.; Shen, D. Multi-Atlas Brain Parcellation Using Squeeze-and-Excitation Fully Convolutional Networks. IEEE Trans. Image Process. 2020, 29, 6864–6872. [Google Scholar] [CrossRef]
  44. Fan, J.; Cao, X.; Wang, Q.; Yap, P.-T.; Shen, D. Adversarial Learning for Mono- or Multi-Modal Registration. Med. Image Anal. 2019, 58, 101545. [Google Scholar] [CrossRef]
  45. Mahapatra, D.; Ge, Z. Training Data Independent Image Registration with Gans Using Transfer Learning and Segmentation Information. In Proceedings of the International Symposium on Biomedical Imaging (ISBI 2019), Venice, Italy, 8–11 April 2019; pp. 709–713. [Google Scholar]
  46. Yang, C.Y.; Huang, J.B.; Yang, M.H. Exploiting self-similarities for single frame super-resolution. Lect. Notes Comput. Sci. 2011, 6494, 497–510. [Google Scholar] [CrossRef] [Green Version]
  47. Greenspan, H.; Peled, S.; Oz, G.; Kiryati, N. MRI inter-slice reconstruction using super-resolution. Lect. Notes Comput. Sci. 2001, 2208, 1204–1206. [Google Scholar] [CrossRef] [Green Version]
  48. Zhu, J.; Yang, G.; Lio, P. How can we make gan perform better in single medical image super-resolution? A lesion focused multi-scale approach. In Proceedings of the 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019), Venice, Italy, 8–11 April 2019; pp. 1669–1673. [Google Scholar] [CrossRef] [Green Version]
  49. Chong, C.K.; Ho, E.T.W. Synthesis of 3D MRI Brain Images with Shape and Texture Generative Adversarial Deep Neural Networks. IEEE Access 2021, 9, 64747–64760. [Google Scholar] [CrossRef]
  50. Ahmad, W.; Ali, H.; Shah, Z.; Azmat, S. A new generative adversarial network for medical images super resolution. Sci. Rep. 2022, 12, 9533. [Google Scholar] [CrossRef] [PubMed]
  51. Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. In Proceedings of the 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, 7–9 May 2015; pp. 1–14. [Google Scholar]
  52. Hongtao, Z.; Shinomiya, Y.; Yoshida, S. 3D Brain MRI Reconstruction based on 2D Super-Resolution Technology. IEEE Trans. Syst. Man. Cybern. Syst. 2020, 2020, 18–23. [Google Scholar] [CrossRef]
  53. Zhang, H.; Shinomiya, Y.; Yoshida, S. 3D MRI Reconstruction Based on 2D Generative Adversarial Network Super-Resolution. Sensors 2021, 21, 2978. [Google Scholar] [CrossRef] [PubMed]
  54. Delannoy, Q.; Pham, C.H.; Cazorla, C.; Tor-Díez, C.; Dollé, G.; Meunier, H.; Bednarek, N.; Fablet, R.; Passat, N.; Rousseau, F. SegSRGAN: Super-resolution and segmentation using generative adversarial networks—Application to neonatal brain MRI. Comput. Biol. Med. 2020, 120, 103755. [Google Scholar] [CrossRef] [PubMed]
55. Zhu, J.; Tan, C.; Yang, J.; Yang, G.; Liò, P. Arbitrary Scale Super-Resolution for Medical Images. Int. J. Neural Syst. 2021, 31. [Google Scholar] [CrossRef]
  56. Pham, C.; Meunier, H.; Bednarek, N.; Fablet, R.; Passat, N.; Rousseau, F.; De Reims, C.H.U.; Champagne-ardenne, D.R. Simultaneous Super-Resolution and Segmentation Using A Generative Adversarial Network: Application To Neonatal Brain MRI. In Proceedings of the 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019), Venice, Italy, 8–11 April 2019; pp. 991–994. [Google Scholar]
  57. Han, S.; Carass, A.; Schar, M.; Calabresi, P.A.; Prince, J.L. Slice profile estimation from 2D MRI acquisition using generative adversarial networks. Proc. Int. Symp. Biomed. Imaging 2021, 2021, 145–149. [Google Scholar] [CrossRef]
  58. Zhou, X.; Qiu, S.; Joshi, P.S.; Xue, C.; Killiany, R.J.; Mian, A.Z.; Chin, S.P.; Au, R.; Kolachalama, V.B. Enhancing magnetic resonance imaging-driven Alzheimer’s disease classification performance using generative adversarial learning. Alzheimer’s Res. Ther. 2021, 13, 60. [Google Scholar] [CrossRef]
  59. You, S.; Lei, B.; Wang, S.; Chui, C.K.; Cheung, A.C.; Liu, Y.; Gan, M.; Wu, G.; Shen, Y. Fine Perceptive GANs for Brain MR Image Super-Resolution in Wavelet Domain. IEEE Trans. Neural Networks Learn. Syst. 2022, 1–13. [Google Scholar] [CrossRef]
  60. Sun, L.; Chen, J.; Xu, Y.; Gong, M.; Yu, K.; Batmanghelich, K. Hierarchical Amortized GAN for 3D High Resolution Medical Image Synthesis. IEEE J. Biomed. Health Inform. 2022, 26, 3966–3975. [Google Scholar] [CrossRef]
  61. Sui, Y.; Afacan, O.; Jaimes, C.; Gholipour, A.W.S. Scan-Specific Generative Neural Network for MRI Super-Resolution Reconstruction. IEEE Trans. Med. Imaging 2022, 41, 1383–1399. [Google Scholar] [CrossRef]
62. Katti, G.; Ara, S.A.; Shireen, A. Magnetic resonance imaging (MRI)–A review. Int. J. Dent. Clin. 2011, 3, 65–70. [Google Scholar]
  63. Revett, K. An Introduction to Magnetic Resonance Imaging: From Image Acquisition to Clinical Diagnosis. In Innovations in Intelligent Image Analysis. Studies in Computational Intelligence; Springer: Berlin/Heidelberg, Germany, 2011; pp. 127–161. ISBN 978-3-642-17933-4. [Google Scholar]
  64. Hofer, S.; Frahm, J. Topography of the human corpus callosum revisited-Comprehensive fiber tractography using diffusion tensor magnetic resonance imaging. Neuroimage 2006, 32, 989–994. [Google Scholar] [CrossRef]
65. Mzoughi, H.; Njeh, I.; Wali, A.; Slima, M.B.; BenHamida, A.; Mhiri, C.; Mahfoudhe, K.B. Deep Multi-Scale 3D Convolutional Neural Network (CNN) for MRI Gliomas Brain Tumor Classification. J. Digit. Imaging 2020, 33, 903–915. [Google Scholar] [CrossRef]
  66. Wang, G.; Gong, E.; Banerjee, S.; Martin, D.; Tong, E.; Choi, J.; Chen, H.; Wintermark, M.; Pauly, J.M.; Zaharchuk, G. Synthesize High-Quality Multi-Contrast Magnetic Resonance Imaging from Multi-Echo Acquisition Using Multi-Task Deep Generative Model. IEEE Trans. Med. Imaging 2020, 39, 3089–3099. [Google Scholar] [CrossRef]
  67. Dar, S.U.H.; Yurt, M.; Karacan, L.; Erdem, A.; Erdem, E.; Cukur, T. Image Synthesis in Multi-Contrast MRI with Conditional Generative Adversarial Networks. IEEE Trans. Med. Imaging 2019, 38, 2375–2388. [Google Scholar] [CrossRef] [Green Version]
  68. Sharma, A.; Hamarneh, G. Missing MRI Pulse Sequence Synthesis Using Multi-Modal Generative Adversarial Network. IEEE Trans. Med. Imaging 2020, 39, 1170–1183. [Google Scholar] [CrossRef] [Green Version]
  69. Alogna, E.; Giacomello, E.; Loiacono, D. Brain Magnetic Resonance Imaging Generation using Generative Adversarial Networks. In Proceedings of the 2020 IEEE Symposium Series on Computational Intelligence (SSCI), Canberra, Australia, 1–4 December 2020; pp. 2528–2535. [Google Scholar] [CrossRef]
  70. Liu, X.; Xing, F.; El Fakhri, G.; Woo, J. A unified conditional disentanglement framework for multimodal brain mr image translation. Proc. Int. Symp. Biomed. Imaging 2021, 2021, 10–14. [Google Scholar] [CrossRef]
  71. Qu, Y.; Deng, C.; Su, W.; Wang, Y.; Lu, Y.; Chen, Z. Multimodal Brain MRI Translation Focused on Lesions. ACM Int. Conf. Proc. Ser. 2020, 352–359. [Google Scholar] [CrossRef]
  72. Liu, X.; Yu, A.; Wei, X.; Pan, Z.; Tang, J. Multimodal MR Image Synthesis Using Gradient Prior and Adversarial Learning. IEEE J. Sel. Top. Signal Process. 2020, 14, 1176–1188. [Google Scholar] [CrossRef]
  73. Yu, B.; Zhou, L.; Wang, L.; Shi, Y.; Fripp, J.; Bourgeat, P. Ea-GANs: Edge-Aware Generative Adversarial Networks for Cross-Modality MR Image Synthesis. IEEE Trans. Med. Imaging 2019, 38, 1750–1762. [Google Scholar] [CrossRef] [Green Version]
  74. Gao, Y.; Liu, Y.; Wang, Y.; Shi, Z.; Yu, J. A Universal Intensity Standardization Method Based on a Many-to-One Weak-Paired Cycle Generative Adversarial Network for Magnetic Resonance Images. IEEE Trans. Med. Imaging 2019, 38, 2059–2069. [Google Scholar] [CrossRef]
  75. Han, C.; Hayashi, H.; Rundo, L.; Araki, R.; Shimoda, W.; Muramatsu, S.; Furukawa, Y.; Mauri, G.; Nakayama, H. GAN-based synthetic brain MR image generation. Proc. Int. Symp. Biomed. Imaging 2018, 2018, 734–738. [Google Scholar] [CrossRef]
  76. Yu, B.; Zhou, L.; Wang, L.; Shi, Y.; Fripp, J.; Bourgeat, P. Sample-Adaptive GANs: Linking Global and Local Mappings for Cross-Modality MR Image Synthesis. IEEE Trans. Med. Imaging 2020, 39, 2339–2350. [Google Scholar] [CrossRef]
  77. Tomar, D.; Lortkipanidze, M.; Vray, G.; Bozorgtabar, B.; Thiran, J.P. Self-Attentive Spatial Adaptive Normalization for Cross-Modality Domain Adaptation. IEEE Trans. Med. Imaging 2021, 40, 2926–2938. [Google Scholar] [CrossRef]
  78. Shen, L.; Zhu, W.; Wang, X.; Xing, L.; Pauly, J.M.; Turkbey, B.; Harmon, S.A.; Sanford, T.H.; Mehralivand, S.; Choyke, P.L.; et al. Multi-Domain Image Completion for Random Missing Input Data. IEEE Trans. Med. Imaging 2021, 40, 1113–1122. [Google Scholar] [CrossRef] [PubMed]
  79. Rachmadi, M.F.; Valdés-Hernández, M.D.C.; Makin, S.; Wardlaw, J.; Komura, T. Automatic spatial estimation of white matter hyperintensities evolution in brain MRI using disease evolution predictor deep neural networks. Med. Image Anal. 2020, 63, 101712. [Google Scholar] [CrossRef] [PubMed]
  80. Kim, K.H.; Do, W.J.; Park, S.H. Improving resolution of MR images with an adversarial network incorporating images with different contrast. Med. Phys. 2018, 45, 3120–3131. [Google Scholar] [CrossRef] [PubMed]
  81. Hamghalam, M.; Wang, T.; Lei, B. High tissue contrast image synthesis via multistage attention-GAN: Application to segmenting brain MR scans. Neural Netw. 2020, 132, 43–52. [Google Scholar] [CrossRef] [PubMed]
  82. Wang, C.; Yang, G.; Papanastasiou, G.; Tsaftaris, S.A.; Newby, D.E.; Gray, C.; Macnaught, G.; MacGillivray, T.J. DiCyc: GAN-based deformation invariant cross-domain information fusion for medical image synthesis. Inf. Fusion 2021, 67, 147–160. [Google Scholar] [CrossRef]
  83. Ma, B.; Zhao, Y.; Yang, Y.; Zhang, X.; Dong, X.; Zeng, D.; Ma, S.; Li, S. MRI image synthesis with dual discriminator adversarial learning and difficulty-aware attention mechanism for hippocampal subfields segmentation. Comput. Med. Imaging Graph. 2020, 86, 101800. [Google Scholar] [CrossRef]
  84. Yang, X.; Lin, Y.; Wang, Z.; Li, X.; Cheng, K.T. Bi-Modality Medical Image Synthesis Using Semi-Supervised Sequential Generative Adversarial Networks. IEEE J. Biomed. Health Inform. 2020, 24, 855–865. [Google Scholar] [CrossRef]
  85. Hagiwara, A.; Otsuka, Y.; Hori, M.; Tachibana, Y.; Yokoyama, K.; Fujita, S.; Andica, C.; Kamagata, K.; Irie, R.; Koshino, S.; et al. Improving the quality of synthetic FLAIR images with deep learning using a conditional generative adversarial network for pixel-by-pixel image translation. Am. J. Neuroradiol. 2019, 40, 224–230. [Google Scholar] [CrossRef] [Green Version]
  86. Naseem, R.; Islam, A.J.; Cheikh, F.A.; Beghdadi, A. Contrast Enhancement: Cross-modal Learning Approach for Medical Images. Proc. IST Int’l. Symp. Electron. Imaging: Image Process. Algorithms Syst. 2022, 34, IPAS-344. [Google Scholar] [CrossRef]
  87. Choi, Y.; Choi, M.; Kim, M.; Ha, J.W.; Kim, S.; Choo, J. StarGAN: Unified Generative Adversarial Networks for Multi-domain Image-to-Image Translation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 8789–8797. [Google Scholar] [CrossRef] [Green Version]
  88. Dai, X.; Lei, Y.; Fu, Y.; Curran, W.J.; Liu, T.; Mao, H.; Yang, X. Multimodal MRI synthesis using unified generative adversarial networks. Med. Phys. 2020, 47, 6343–6354. [Google Scholar] [CrossRef]
  89. Xin, B.; Hu, Y.; Zheng, Y.; Liao, H. Multi-Modality Generative Adversarial Networks with Tumor Consistency Loss for Brain MR Image Synthesis. Proc. Int. Symp. Biomed. Imaging 2020, 2020, 1803–1807. [Google Scholar] [CrossRef]
  90. Mohan, J.; Krishnaveni, V.; Guo, Y. A survey on the magnetic resonance image denoising methods. Biomed. Signal Process. Control 2014, 9, 56–69. [Google Scholar] [CrossRef]
  91. Bermudez, C.; Plassard, A.; Davis, T.; Newton, A.; Resnick, S.; Landmana, B. Learning Implicit Brain MRI Manifolds with Deep Learning. Physiol. Behav. 2017, 176, 139–148. [Google Scholar] [CrossRef]
  92. Ran, M.; Hu, J.; Chen, Y.; Chen, H.; Sun, H.; Zhou, J.; Zhang, Y. Denoising of 3D magnetic resonance images using a residual encoder–decoder Wasserstein generative adversarial network. Med. Image Anal. 2019, 55, 165–180. [Google Scholar] [CrossRef] [Green Version]
  93. Christilin, D.M.A.B.; Mary, D.M.S. Residual encoder-decoder up-sampling for structural preservation in noise removal. Multimed. Tools Appl. 2021, 80, 19441–19457. [Google Scholar] [CrossRef]
  94. Li, Z.; Tian, Q.; Ngamsombat, C.; Cartmell, S.; Conklin, J.; Filho, A.L.M.G.; Lo, W.C.; Wang, G.; Ying, K.; Setsompop, K.; et al. High-fidelity fast volumetric brain MRI using synergistic wave-controlled aliasing in parallel imaging and a hybrid denoising generative adversarial network (HDnGAN). Med. Phys. 2022, 49, 1000–1014. [Google Scholar] [CrossRef]
  95. Akkus, Z.; Galimzianova, A.; Hoogi, A.; Rubin, D.L.; Erickson, B.J. Deep Learning for Brain MRI Segmentation: State of the Art and Future Directions. J. Digit. Imaging 2017, 30, 449–459. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  96. Chen, H.; Qin, Z.; Ding, Y.; Lan, T. Brain Tumor Segmentation with Generative Adversarial Nets. In Proceedings of the 2019 2nd International Conference on Artificial Intelligence and Big Data (ICAIBD), Chengdu, China, 25–28 May 2019; pp. 301–305. [Google Scholar] [CrossRef]
  97. Cheng, G.; Ji, H.; He, L. Correcting and reweighting false label masks in brain tumor segmentation. Med. Phys. 2021, 48, 169–177. [Google Scholar] [CrossRef] [PubMed]
  98. Elazab, A.; Wang, C.; Safdar Gardezi, S.J.; Bai, H.; Wang, T.; Lei, B.; Chang, C. Glioma Growth Prediction via Generative Adversarial Learning from Multi-Time Points Magnetic Resonance Images. Proc. Annu. Int. Conf. IEEE Eng. Med. Biol. Soc. EMBS 2020, 2020, 1750–1753. [Google Scholar] [CrossRef]
  99. Elazab, A.; Wang, C.; Gardezi, S.J.S.; Bai, H.; Hu, Q.; Wang, T.; Chang, C.; Lei, B. GP-GAN: Brain tumor growth prediction using stacked 3D generative adversarial networks from longitudinal MR Images. Neural Netw. 2020, 132, 321–332. [Google Scholar] [CrossRef] [PubMed]
  100. Sandhiya, B.; Priyatharshini, R.; Ramya, B.; Monish, S.; Sai Raja, G.R. Reconstruction, identification and classification of brain tumor using gan and faster regional-CNN. In Proceedings of the 2021 3rd International Conference on Signal Processing and Communication (ICPSC), Coimbatore, India, 13–14 May 2021; pp. 238–242. [Google Scholar] [CrossRef]
  101. Alex, V.; Safwan, K.P.M.; Chennamsetty, S.S.; Krishnamurthi, G. Generative adversarial networks for brain lesion detection. Med. Imaging 2017 Image Process. 2017, 10133, 101330G. [Google Scholar] [CrossRef]
102. Hamghalam, M.; Wang, T.; Lei, B. Transforming Intensity Distribution of Brain Lesions via Conditional GANs for Segmentation. In Proceedings of the 2020 IEEE 17th International Symposium on Biomedical Imaging (ISBI), Iowa City, IA, USA, 3–7 April 2020; pp. 1499–1502. [Google Scholar]
  103. Thirumagal, E.; Saruladha, K. Design of FCSE-GAN for dissection of brain tumour in MRI. In Proceedings of the 2020 International Conference on Smart Technologies in Computing, Electrical and Electronics (ICSTCEE), Bengaluru, India, 9–10 October 2020; pp. 61–65. [Google Scholar] [CrossRef]
  104. Tokuoka, Y.; Suzuki, S.; Sugawara, Y. An inductive transfer learning approach using cycleconsistent adversarial domain adaptation with application to brain tumor segmentation. In Proceedings of the 2019 6th International Conference on Biomedical and Bioinformatics Engineering, Shanghai, China, 13–15 November 2019; pp. 44–48. [Google Scholar] [CrossRef]
  105. Huo, Y.; Xu, Z.; Moon, H.; Bao, S.; Assad, A.; Moyo, T.K.; Savona, M.R.; Abramson, R.G.; Landman, B.A. SynSeg-Net: Synthetic Segmentation without Target Modality Ground Truth. IEEE Trans. Med. Imaging 2019, 38, 1016–1025. [Google Scholar] [CrossRef]
  106. Kossen, T.; Subramaniam, P.; Madai, V.I.; Hennemuth, A.; Hildebrand, K.; Hilbert, A.; Sobesky, J.; Livne, M.; Galinovic, I.; Khalil, A.A.; et al. Synthesizing anonymized and labeled TOF-MRA patches for brain vessel segmentation using generative adversarial networks. Comput. Biol. Med. 2021, 131, 104254. [Google Scholar] [CrossRef]
  107. Yu, W.; Lei, B.; Ng, M.K.; Cheung, A.C.; Shen, Y.; Wang, S. Tensorizing GAN with High-Order Pooling for Alzheimer’s Disease Assessment. IEEE Trans. Neural Netw. Learn. Syst. 2021, 33, 4945–4959. [Google Scholar] [CrossRef]
  108. Wu, X.; Bi, L.; Fulham, M.; Feng, D.D.; Zhou, L.; Kim, J. Unsupervised brain tumor segmentation using a symmetric-driven adversarial network. Neurocomputing 2021, 455, 242–254. [Google Scholar] [CrossRef]
  109. Asma-Ull, H.; Yun, I.D.; Han, D. Data Efficient Segmentation of Various 3D Medical Images Using Guided Generative Adversarial Networks. IEEE Access 2020, 8, 102022–102031. [Google Scholar] [CrossRef]
  110. Tong, N.; Gou, S.; Yang, S. Shape constrained fully convolutional DenseNet with adversarial training for multiorgan segmentation on head and neck CT and low-field MR images. Med. Phys. 2019, 46, 2669–2682. [Google Scholar] [CrossRef]
  111. Yuan, W.; Wei, J.; Wang, J.; Ma, Q.; Tasdizen, T. Unified generative adversarial networks for multimodal segmentation from unpaired 3D medical images. Med. Image Anal. 2020, 64, 101731. [Google Scholar] [CrossRef]
  112. Chen, Y.; Yang, X.; Cheng, K.; Li, Y.; Liu, Z.; Shi, Y. Efficient 3D Neural Networks with Support Vector Machine for Hippocampus Segmentation. In Proceedings of the 2020 International Conference on Artificial Intelligence and Computer Engineering (ICAICE), Beijing, China, 23–25 October 2020; pp. 337–341. [Google Scholar] [CrossRef]
  113. Fu, X.; Chen, C.; Li, D. Survival prediction of patients suffering from glioblastoma based on two-branch DenseNet using multi-channel features. Int. J. Comput. Assist. Radiol. Surg. 2021, 16, 207–217. [Google Scholar] [CrossRef]
  114. Zhang, C.; Song, Y.; Liu, S.; Lill, S.; Wang, C.; Tang, Z.; You, Y.; Gao, Y.; Klistorner, A.; Barnett, M.; et al. MS-GAN: GAN-Based Semantic Segmentation of Multiple Sclerosis Lesions in Brain Magnetic Resonance Imaging. In Proceedings of the 2018 Digital Image Computing: Techniques and Applications (DICTA), Canberra, Australia, 10–13 December 2018; pp. 1–8. [Google Scholar] [CrossRef]
  115. Lee, D.; Yoo, J.; Tak, S.; Ye, J.C. Deep residual learning for accelerated MRI using magnitude and phase networks. IEEE Trans. Biomed. Eng. 2018, 65, 1985–1995. [Google Scholar] [CrossRef] [Green Version]
  116. Shaul, R.; David, I.; Shitrit, O.; Riklin Raviv, T. Subsampled brain MRI reconstruction by generative adversarial neural networks. Med. Image Anal. 2020, 65, 101747. [Google Scholar] [CrossRef]
  117. Quan, T.M.; Nguyen-Duc, T.; Jeong, W.K. Compressed Sensing MRI Reconstruction Using a Generative Adversarial Network with a Cyclic Loss. IEEE Trans. Med. Imaging 2018, 37, 1488–1497. [Google Scholar] [CrossRef] [Green Version]
  118. Li, G.; Lv, J.; Wang, C. A Modified Generative Adversarial Network Using Spatial and Channel-Wise Attention for CS-MRI Reconstruction. IEEE Access 2021, 9, 83185–83198. [Google Scholar] [CrossRef]
  119. Lv, J.; Li, G.; Tong, X.; Chen, W.; Huang, J.; Wang, C.; Yang, G. Transfer learning enhanced generative adversarial networks for multi-channel MRI reconstruction. Comput. Biol. Med. 2021, 134, 104504. [Google Scholar] [CrossRef]
  120. Do, W.-J.; Seo, S.; Han, Y.; Chul Ye, J.; Hong Choi, S.; Park, S.-H. Reconstruction of multicontrast MR images through deep learning. Med. Phys. 2019, 47, 983–997. [Google Scholar] [CrossRef]
  121. Gu, J.; Li, Z.; Wang, Y.; Yang, H.; Qiao, Z.; Yu, J. Deep Generative Adversarial Networks for Thin-Section Infant MR Image Reconstruction. IEEE Access 2019, 7, 68290–68304. [Google Scholar] [CrossRef]
  122. Han, C.; Rundo, L.; Murao, K.; Noguchi, T.; Shimahara, Y.; Milacski, Z.Á.; Koshino, S.; Sala, E.; Nakayama, H.; Satoh, S. MADGAN: Unsupervised medical anomaly detection GAN using multiple adjacent brain MRI slice reconstruction. BMC Bioinform. 2021, 22, 31. [Google Scholar] [CrossRef] [PubMed]
  123. Chai, Y.; Xu, B.; Zhang, K.; Lepore, N.; Wood, J.C. MRI restoration using edge-guided adversarial learning. IEEE Access 2020, 8, 83858–83870. [Google Scholar] [CrossRef] [PubMed]
  124. Wegmayr, V.; Horold, M.; Buhmann, J.M. Generative aging of brain MRI for early prediction of MCI-AD conversion. Proc. Int. Symp. Biomed. Imaging 2019, 2019, 1042–1046. [Google Scholar] [CrossRef]
  125. Guo, X.; Wu, L.; Zhao, L. Deep Graph Translation. IEEE Trans. Neural Netw. Learn. Syst. 2022, 1–10. [Google Scholar] [CrossRef] [PubMed]
  126. Nebli, A.; Rekik, I. Adversarial brain multiplex prediction from a single brain network with application to gender fingerprinting. Med. Image Anal. 2021, 67, 101843. [Google Scholar] [CrossRef]
  127. Wang, L. 3D Cgan Based Cross-Modality Mr Image Synthesis for Brain Tumor Segmentation. In Proceedings of the 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018), Washington, DC, USA, 4–7 April 2018; pp. 626–630. [Google Scholar]
  128. Dar, S.U.H.; Yurt, M.; Shahdloo, M.; Ildiz, M.E.; Tinaz, B.; Cukur, T. Prior-guided image reconstruction for accelerated multi-contrast mri via generative adversarial networks. IEEE J. Sel. Top. Signal Process. 2020, 14, 1072–1087. [Google Scholar] [CrossRef]
  129. Chen, Y.; Jakary, A.; Avadiappan, S.; Hess, C.P.; Lupo, J.M. QSMGAN: Improved Quantitative Susceptibility Mapping using 3D Generative Adversarial Networks with increased receptive field. Neuroimage 2020, 207, 116389. [Google Scholar] [CrossRef]
  130. Ji, J.; Liu, J.; Han, L.; Wang, F. Estimating Effective Connectivity by Recurrent Generative Adversarial Networks. IEEE Trans. Med. Imaging 2021, 40, 3326–3336. [Google Scholar] [CrossRef]
  131. Finck, T.; Li, H.; Grundl, L.; Eichinger, P.; Bussas, M.; Mühlau, M.; Menze, B.; Wiestler, B. Deep-Learning Generated Synthetic Double Inversion Recovery Images Improve Multiple Sclerosis Lesion Detection. Investig. Radiol. 2020, 55, 318–323. [Google Scholar] [CrossRef]
  132. Zhao, Y.; Ma, B.; Jiang, P.; Zeng, D.; Wang, X.; Li, S. Prediction of Alzheimer’s Disease Progression with Multi-Information Generative Adversarial Network. IEEE J. Biomed. Health Inform. 2021, 25, 711–719. [Google Scholar] [CrossRef]
  133. Ren, Z.; Li, J.; Xue, X.; Li, X.; Yang, F.; Jiao, Z.; Gao, X. Reconstructing seen image from brain activity by visually-guided cognitive representation and adversarial learning. Neuroimage 2021, 228, 117602. [Google Scholar] [CrossRef]
134. Goldfryd, T.; Gordon, S.; Raviv, T.R. Deep Semi-Supervised Bias Field Correction of MR Images. In Proceedings of the 2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI), Nice, France, 13–16 April 2021; pp. 1836–1840.
135. Meliadò, E.F.; Raaijmakers, A.J.E.; Sbrizzi, A.; Steensma, B.R.; Maspero, M.; Savenije, M.H.F.; Luijten, P.R.; van den Berg, C.A.T. A deep learning method for image-based subject-specific local SAR assessment. Magn. Reson. Med. 2020, 83, 695–711.
136. Parkes, L.; Fulcher, B.; Yücel, M.; Fornito, A. An evaluation of the efficacy, reliability, and sensitivity of motion correction strategies for resting-state functional MRI. Neuroimage 2018, 171, 415–436.
137. Yendiki, A.; Koldewyn, K.; Kakunoori, S.; Kanwisher, N.; Fischl, B. Spurious group differences due to head motion in a diffusion MRI study. Neuroimage 2014, 88, 79–90.
138. Johnson, P.M.; Drangova, M. Conditional generative adversarial network for 3D rigid-body motion correction in MRI. Magn. Reson. Med. 2019, 82, 901–910.
139. Armanious, K.; Gatidis, S.; Nikolaou, K.; Yang, B.; Thomas, K. Retrospective Correction of Rigid and Non-Rigid MR Motion Artifacts Using GANs. In Proceedings of the 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019), Venice, Italy, 8–11 April 2019; pp. 1550–1554.
140. Küstner, T.; Armanious, K.; Yang, J.; Yang, B.; Schick, F.; Gatidis, S. Retrospective correction of motion-affected MR images using deep learning frameworks. Magn. Reson. Med. 2019, 82, 1527–1540.
141. Wolterink, J.M.; Dinkla, A.M.; Savenije, M.H.F.; Seevinck, P.R.; van den Berg, C.A.T.; Išgum, I. Deep MR to CT synthesis using unpaired data. Lect. Notes Comput. Sci. 2017, 10557, 14–23.
142. Armanious, K.; Jiang, C.; Abdulatif, S.; Küstner, T.; Gatidis, S.; Yang, B. Unsupervised medical image translation using Cycle-MeDGAN. In Proceedings of the 2019 27th European Signal Processing Conference (EUSIPCO), A Coruña, Spain, 2–6 September 2019.
143. Sajjad, M.; Khan, S.; Muhammad, K.; Wu, W.; Ullah, A.; Baik, S.W. Multi-grade brain tumor classification using deep CNN with extensive data augmentation. J. Comput. Sci. 2019, 30, 174–182.
144. Rejusha, R.R.T.; Vipin Kumar, S.V.K. Artificial MRI Image Generation using Deep Convolutional GAN and its Comparison with other Augmentation Methods. In Proceedings of the 2021 International Conference on Communication, Control and Information Sciences (ICCISc), Idukki, India, 16–18 June 2021.
145. Zhang, X.; Yang, Y.; Wang, H.; Ning, S.; Wang, H. Deep Neural Networks with Broad Views for Parkinson’s Disease Screening. In Proceedings of the 2019 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), San Diego, CA, USA, 18–21 November 2019; pp. 1018–1022.
146. Ge, C.; Gu, I.Y.H.; Store Jakola, A.; Yang, J. Cross-Modality Augmentation of Brain MR Images Using a Novel Pairwise Generative Adversarial Network for Enhanced Glioma Classification. In Proceedings of the 2019 IEEE International Conference on Image Processing (ICIP), Taipei, Taiwan, 22–25 September 2019; pp. 559–563.
147. Deepak, S.; Ameer, P.M. MSG-GAN Based Synthesis of Brain MRI with Meningioma for Data Augmentation. In Proceedings of the 2020 IEEE International Conference on Electronics, Computing and Communication Technologies (CONECCT), Bangalore, India, 2–4 July 2020.
148. Han, C.; Rundo, L.; Araki, R.; Nagano, Y.; Furukawa, Y.; Mauri, G.; Nakayama, H.; Hayashi, H. Combining noise-to-image and image-to-image GANs: Brain MR image augmentation for tumor detection. IEEE Access 2019, 7, 156966–156977.
149. Ge, C.; Gu, I.Y.H.; Jakola, A.S.; Yang, J. Enlarged Training Dataset by Pairwise GANs for Molecular-Based Brain Tumor Classification. IEEE Access 2020, 8, 22560–22570.
150. Sanders, J.W.; Chen, H.S.M.; Johnson, J.M.; Schomer, D.F.; Jimenez, J.E.; Ma, J.; Liu, H.L. Synthetic generation of DSC-MRI-derived relative CBV maps from DCE MRI of brain tumors. Magn. Reson. Med. 2021, 85, 469–479.
151. Mukherkjee, D.; Saha, P.; Kaplun, D.; Sinitca, A.; Sarkar, R. Brain tumor image generation using an aggregation of GAN models with style transfer. Sci. Rep. 2022, 12, 9141.
152. Wu, W.; Lu, Y.; Mane, R.; Guan, C. Deep Learning for Neuroimaging Segmentation with a Novel Data Augmentation Strategy. Proc. Annu. Int. Conf. IEEE Eng. Med. Biol. Soc. EMBS 2020, 2020, 1516–1519.
153. Biswas, A.; Bhattacharya, P.; Maity, S.P.; Banik, R. Data Augmentation for Improved Brain Tumor Segmentation. IETE J. Res. 2021, 1–11.
154. Geng, X.; Yao, Q.; Jiang, K.; Zhu, Y.Q. Deep Neural Generative Adversarial Model based on VAE + GAN for Disorder Diagnosis. In Proceedings of the 2020 International Conference on Internet of Things and Intelligent Applications (ITIA), Zhenjiang, China, 27–29 November 2020.
155. Barile, B.; Marzullo, A.; Stamile, C.; Durand-Dubief, F.; Sappey-Marinier, D. Data augmentation using generative adversarial neural networks on brain structural connectivity in multiple sclerosis. Comput. Methods Programs Biomed. 2021, 206, 106113.
156. Li, D.; Du, C.; Wang, S.; Wang, H.; He, H. Multi-subject data augmentation for target subject semantic decoding with deep multi-view adversarial learning. Inf. Sci. 2021, 547, 1025–1044.
157. Budianto, T.; Nakai, T.; Imoto, K.; Takimoto, T.; Haruki, K. Dual-encoder Bidirectional Generative Adversarial Networks for Anomaly Detection. In Proceedings of the 2020 19th IEEE International Conference on Machine Learning and Applications (ICMLA), Miami, FL, USA, 14–17 December 2020; pp. 693–700.
158. Platscher, M.; Zopes, J.; Federau, C. Image translation for medical image generation: Ischemic stroke lesion segmentation. Biomed. Signal Process. Control 2022, 72, 103283.
159. Gu, Y.; Peng, Y.; Li, H. AIDS Brain MRIs Synthesis via Generative Adversarial Networks Based on Attention-Encoder. In Proceedings of the 2020 IEEE 6th International Conference on Computer and Communications (ICCC), Chengdu, China, 11–14 December 2020; pp. 629–633.
160. Knešaurek, K.; Ivanovic, M.; Weber, D.A. Medical image registration. Europhys. News 2000, 31, 5–8.
161. Sun, Y.; Gao, K.; Wu, Z.; Li, G.; Zong, X.; Lei, Z.; Wei, Y.; Ma, J.; Yang, X.; Feng, X.; et al. Multi-Site Infant Brain Segmentation Algorithms: The iSeg-2019 Challenge. IEEE Trans. Med. Imaging 2021, 40, 1363–1376.
162. Song, X.W.; Dong, Z.Y.; Long, X.Y.; Li, S.F.; Zuo, X.N.; Zhu, C.Z.; He, Y.; Yan, C.G.; Zang, Y.F. REST: A Toolkit for resting-state functional magnetic resonance imaging data processing. PLoS ONE 2011, 6, e25031.
163. Gu, Y.; Zeng, Z.; Chen, H.; Wei, J.; Zhang, Y.; Chen, B.; Li, Y.; Qin, Y.; Xie, Q.; Jiang, Z.; et al. MedSRGAN: Medical images super-resolution using generative adversarial networks. Multimed. Tools Appl. 2020, 79, 21815–21840.
164. Roychowdhury, S.; Roychowdhury, S. A Modular Framework to Predict Alzheimer’s Disease Progression Using Conditional Generative Adversarial Networks. In Proceedings of the 2020 International Joint Conference on Neural Networks (IJCNN), Glasgow, UK, 19–24 July 2020; pp. 12–19.
165. Rezaei, M.; Yang, H.; Meinel, C. Generative Adversarial Framework for Learning Multiple Clinical Tasks. In Proceedings of the 2018 Digital Image Computing: Techniques and Applications (DICTA), Canberra, Australia, 10–13 December 2018; pp. 1–8.
166. Gulrajani, I.; Ahmed, F.; Arjovsky, M.; Dumoulin, V.; Courville, A. Improved training of Wasserstein GANs. Adv. Neural Inf. Process. Syst. 2017, 2017, 5768–5778.
167. Zhu, J.-Y.; Park, T.; Isola, P.; Efros, A.A. Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 2223–2232.
168. Mathieu, M.; Couprie, C.; LeCun, Y. Deep multi-scale video prediction beyond mean square error. In Proceedings of the 4th International Conference on Learning Representations, ICLR 2016, Conference Track Proceedings, San Juan, Puerto Rico, 2–4 May 2016; pp. 1–14.
Figure 1. Block diagram for GAN-synthesized brain MRI.
Figure 2. The number of research papers published in a particular year on GAN-based brain MRI.
Figure 3. Flow diagram of the retrieval and selection process.
Figure 4. Applications of GAN-synthesized images for brain MRI.
Figure 5. Grouping of contrast enhancement methods.
Figure 6. Grouping of segmentation methods.
Figure 7. Grouping of reconstruction methods.
Figure 8. Distribution of commonly used loss functions in the SLR.
Figure 9. Distribution of preprocessing operations performed on ground truth brain MRI.
Figure 10. Format conversion of ground truth brain MRI.
Figure 11. Major GAN variants and their extensions.
Table 1. Prior research in MRI brain imaging.
Ref. No. | Year | Objective | Imaging Modality | DL Methods | Type
[8] | 2022 | Summarizes GAN's role in brain MRI. | MRI | GAN | Scoping review
[9] | 2018 | Gives GAN training, architecture, and a few application details. | All types | GAN | Overview
[10] | 2020 | Summarizes machine learning and DL classification methods. | MRI | CNN 1, RNN 2, GAN, DBM 3 | Review
[11] | 2020 | Discusses GAN's application in radiology and quantitatively compares the performance metrics for synthetic images. | CT, MRI, PET, and X-ray | GAN, CNN | SLR
1 CNN: convolutional neural network; 2 RNN: recurrent neural network; 3 DBM: deep Boltzmann machine.
Table 2. Research questions for this SLR.
Number | Research Question | Motivation
RQ 1 | What are the applications of GAN-synthesized images for brain MRI? | Divides the available literature into clearer categories.
RQ 2 | What are the most commonly used loss functions in GAN-synthesized image applications for brain MRI? | The loss function affects the training of a GAN.
RQ 3 | What preprocessing steps are performed on ground truth brain MRI? | Preprocessing of ground truth brain MRIs is crucial for the fidelity of the subsequent GAN operations.
RQ 4 | How do the existing evaluation metrics for GAN-synthesized brain MRI compare? | Encourages a comparative study of the available evaluation metrics.
Table 3. Search query used for paper selection related to MRI brain imaging.
Database | Query | Initial Results
Web of Science | ((MR Imaging) OR (MRI) OR (magnetic resonance imaging)) AND ((Brain Imaging) OR (Brain Images)) AND (GAN OR Generative Adversarial Network) | 210
Scopus | ((MR Imaging) OR (MRI) OR (magnetic resonance imaging)) AND ((Brain Imaging) OR (Brain Images)) AND (GAN OR Generative Adversarial Network) | 389
Table 4. List of inclusion and exclusion criteria.
Inclusion criteria:
- Original and empirical research studies.
- Research studies published between 2017 and 2022.
- Research studies providing an answer to any of the RQs.
- Research studies including the search keywords in the title, abstract, or full text.
- Research studies from a Q1 or Q2 journal or a recent conference.
Exclusion criteria:
- Research studies in languages other than English.
- Duplicate research studies present in both databases.
- Research studies unavailable in their full-text form.
- Research studies with source modality images other than MRI.
- Research studies with non-human brain images.
- Research studies with no GAN involvement in image generation.
Table 5. Summary of GAN-synthesized images used for translation.
Ref. No. | GAN Model | Technique
MRI-to-CT
[16] | GAN | MI to avoid the issue of unregistered data
[17] | CGAN | MI and binary cross-entropy as the discriminator loss functions to achieve the task-specific goal
[18] | CGAN | Pixel loss penalizes pixel-wise differences between the real and synthetic CT (SCT) scans
[19] | CGAN | Calculates dosimetric accuracy through SCT generation
[20] | MedGAN | Non-adversarial losses (a combination of style loss and content loss) to capture high- and low-frequency details of the image
[21] | CGAN | Residual blocks inserted into the CGAN network
[22] | GAN | Image-guided radiation therapy
[23] | AttentionGAN | The attention network helps predict the regions of interest
[24] | UAGGAN | Identifies the relevant region of the image and applies a suitable translation to that location
[25] | CycleGAN | A dense block allows better one-to-one mapping
[26] | CGAN | Constant image patch size; memory requirement is independent of the image size
MRI-to-PET
[31] | LAGAN | Locality-adaptive convolution with the same kernel for every input modality
[32] | GAN | Generates adaptive PET templates
[33] | Sketcher-Refiner GAN | Generates PET-derived myelin content maps from four MRI modalities
[34] | TPA-GAN | Integrates pyramid convolution and an attention module
[35] | BMGAN | Uses image contexts and latent vectors for generation
Table 6. Summary of GAN-synthesized images used for registration.
Ref. No. | GAN Model | Technique
[40] | CGAN | Slice-to-volume registration
[41] | CycleGAN | Symmetric registration
[42] | GAN | Multi-atlas-based brain image parcellation
[43] | GAN | Multi-atlas-guided deep learning parcellation
[44] | GAN | 3D image registration
[45] | GAN | Transfer learning for registration
Table 7. Summary of GAN-synthesized images used for super-resolution.
Ref. No. | GAN Model | Technique
[48] | MSGAN | Lesion-focused SR method
[49] | SRGAN | Use of a shaping network
[50] | SRGAN | Progressive upscaling method to generate true colors
[52] | ESRGAN | Slices from three latitudes are used for SR
[53] | NESRGAN | Noise and interpolated sampling
[54] | MedSRGAN | Residual whole-map attention for interpolation
[55] | GAN | Medical image arbitrary-scale super-resolution method
[57] | GAN | Improves the resolution of through-plane slices
[58] | GAN | Makes the image resolution of a 1.5-T scanner equivalent to that of a 3-T scanner
[59] | FPGAN | Divide-and-conquer approach with multiple subbands in the wavelet domain
[60] | End-to-end GAN | Uses a hierarchical structure
Table 8. Summary of loss functions used in applications of GAN-synthesized brain MRI.
Loss Function | Description | Probability-Based (Yes/No) | Ref. No.

Commonly used loss functions:

Adversarial loss | Created in the repeated generation-and-classification cycle: the generator minimizes the loss while the discriminator maximizes it. $\mathcal{L}_{GAN}(G,D) = \mathbb{E}_{x,y}[\log D(x,y)] + \mathbb{E}_{x,z}[\log(1 - D(x, G(x,z)))]$, where $y$ is the ground truth image, $G$ is the generator network, $G(x,z)$ is the generated image, and $D$ is the discriminator network. | Yes | [20,25,35,41,45,53,54,55,57,59,67,72,74,77,81,83]
Cycle consistency loss | Allows the generator to learn a one-to-one mapping from the input image domain to the target image domain: $\mathcal{L}_{cyc}(G,F) = \mathbb{E}_{x \sim P_{data}(x)}[\lVert F(G(x)) - x \rVert_1]$, where $F$ is the reverse mapping from the target domain back to the source domain. | Yes | [24,28,45,67,71,74,81,82,88]
L1 loss | Also called mean absolute error (MAE); a pixel-wise error that causes over-smoothing in the resultant images: $\mathcal{L}_{L1}(G) = \frac{1}{n}\lVert y - G(x,z) \rVert_1$, where $n$ is the number of voxels in an image and $\lVert \cdot \rVert_1$ is the sum of voxel-wise residuals. | No | [19,20,21,23,25,26,33,34,35,37,52,55]
L2 loss | Also called mean squared distance (MSD); indicates the error between the generated and original images and tends to give faint images: $\mathcal{L}_{L2}(G) = \frac{1}{n}\lVert y - G(x,z) \rVert_2^2$, where $\lVert \cdot \rVert_2^2$ is the sum of squared voxel-wise intensity residuals. | No | [50,51,68,69,80]
Perceptual loss | Pixel-reconstruction losses blur the final outputs and cannot express an image's perceptual quality; the perceptual loss is instead the Euclidean distance in feature space, extracting semantic features from target images: $\mathcal{L}_{perceptual}(G) = \frac{1}{whd}\lVert \phi(G(x)) - \phi(y) \rVert_F^2$, where $\phi$ is a feature extractor and $w$, $h$, and $d$ are the dimensions of the feature maps. | No | [20,35,37,52,53,55,56,66,67,92]
Wasserstein loss | WGAN evaluates the Earth Mover's distance by training a discriminator (critic) bounded by a Lipschitz constraint, enforced here through a gradient penalty: $\mathcal{L}_{WGAN}(D) = \mathbb{E}_{x \sim P_n}[D(G(x))] - \mathbb{E}_{y \sim P_r}[D(y)] + \lambda\, \mathbb{E}_{\hat{x} \sim P_{\hat{x}}}[(\lVert \nabla_{\hat{x}} D(\hat{x}) \rVert_2 - 1)^2]$, where $y$ is sampled from the real distribution $P_r$, $x$ from the noise distribution $P_n$, $\hat{x}$ lies between real and generated samples, and $\lambda$ is the penalty hyperparameter. | Yes | [29,48,49,56,86,92,93,103]

Other loss functions:

Attention regularization loss | Ensures the learning of orthogonal attention maps. | No | [77]
Binary cross entropy (BCE) loss | The negative logarithm of the predicted probability during binary classification. | Yes | [34,42,43,50,58,65,69]
Classification loss | The average cross-entropy value of the discriminator's logistic sigmoid output. | Yes | [58,102]
Cycle-perceptual loss | Captures the high-level perceptual errors between the original and dummy images. | No | [142]
Fidelity loss | Indicates the dissimilarity between the fake and the spatially normalized image; generally added to the discriminator loss function. | No | [115,116]
Gradient difference (GD) loss | The gradient difference between the original and dummy images, which retains sharpness in the synthetic images. | No | [86,88,89]
Identity loss | Responsible for conserving colors and intensities. | Yes | [77]
Image alignment loss | Based on normalized mutual information (NMI); used for information fusion. | Yes | [82]
Mean p distance (MPD) | The lp-norm, measuring the distance between synthetic and original images. | No | [25,27]
Mutual information loss | Mutual information (MI) captures the "information content" in one variable when another variable is fully observed; used as the loss function. | Yes | [16,17,103]
Multi-scale L1 loss | The multi-scale feature variance between the predicted multi-channel probability map and the actual image. | No | [42,43]
Registration loss | Penalizes the variance between the translated and transformed images and encourages local smoothness. | No | [41]
Self-adaptive Charbonnier loss | Based on the pixel-wise differences between real and fake images. | No | [140]
Style-transfer loss | Enhances the texture and fine structure of the desired target images. | Yes | [140,142]
Supervision loss | Denoted by the cumulative squared error; measures pixel shifts between original and synthetic images. | No | [41]
Symmetry loss | Stresses inverse consistency in the predicted transformations. | No | [41]
Synthetic consistency loss | Balances the mean absolute error (MAE) and gradient difference (GD), indicating how far the generated image lags behind the target image. | No | [72]
Voxel-wise loss | A pixel-level penalty between the translated and original images; applicable only to paired datasets. | Yes | [66,77,83]
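To make the most common rows of Table 8 concrete, the following is a minimal PyTorch sketch of the adversarial, L1, and cycle-consistency losses as defined above. It is illustrative only: the tensor shapes, variable names, and toy data are assumptions, not taken from any reviewed study.

```python
import torch
import torch.nn.functional as F

def adversarial_loss(d_real, d_fake):
    """Standard GAN discriminator loss: maximize log D(real) + log(1 - D(fake)).
    d_real and d_fake are discriminator outputs in (0, 1)."""
    real_term = F.binary_cross_entropy(d_real, torch.ones_like(d_real))
    fake_term = F.binary_cross_entropy(d_fake, torch.zeros_like(d_fake))
    return real_term + fake_term

def l1_loss(generated, target):
    """Pixel-wise mean absolute error (MAE), averaged over all voxels."""
    return torch.mean(torch.abs(target - generated))

def cycle_consistency_loss(x, x_reconstructed):
    """L1 penalty between an input and its round-trip reconstruction F(G(x))."""
    return torch.mean(torch.abs(x - x_reconstructed))

# Toy usage with random stand-ins for discriminator scores and image batches.
d_real = torch.sigmoid(torch.randn(8, 1))   # scores for real MRI slices
d_fake = torch.sigmoid(torch.randn(8, 1))   # scores for synthetic slices
fake = torch.rand(8, 1, 64, 64)             # generated image batch
real = torch.rand(8, 1, 64, 64)             # ground truth batch

print(adversarial_loss(d_real, d_fake).item())
print(l1_loss(fake, real).item())
print(cycle_consistency_loss(real, fake).item())
```

In practice, the papers cited in Table 8 typically combine several of these terms into a weighted sum, with the weights tuned per application.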
Table 9. Summary of preprocessing software packages commonly used for brain MRI.
Preprocessing Software | URL | Use | Ref. No.
FreeSurfer | http://surfer.nmr.mgh.harvard.edu (accessed on 26 August 2022) | Skull-stripping, registration, fMRI analysis | [114,162]
FMRIB Software Library (FSL) | http://fsl.fmrib.ox.ac.uk/ (accessed on 26 August 2022) | Registration, alignment, skull-stripping | [115,162]
Advanced Normalization Tools (ANTs) | http://stnava.github.io/ANTs/ (accessed on 26 August 2022) | Registration | [5,49,136,163,164]
Statistical Parametric Mapping (SPM) | http://www.fil.ion.ucl.ac.uk/spm (accessed on 26 August 2022) | Skull-stripping | [74,121]
Velocity (Varian) | https://www.varian.com/ (accessed on 26 August 2022) | Registration | [25,27,165]
Data Processing Assistant for Resting-State fMRI (DPARSF) | http://www.restfmri.net (accessed on 26 August 2022) | fMRI data processing | [166]
Elastix | https://elastix.lumc.nl/ (accessed on 26 August 2022) | Registration | [68,167]
BrainSuite | http://brainsuite.org/ (accessed on 26 August 2022) | Skull-stripping | [162]
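Beyond these dedicated packages, the simpler intensity-level preprocessing steps (see Figures 9 and 10) are often scripted directly before GAN training. The sketch below is a minimal, assumed pipeline, not a step prescribed by the reviewed studies: it uses nibabel as one common NIfTI reader, and the file name and percentile thresholds are hypothetical.

```python
import nibabel as nib
import numpy as np

def preprocess_volume(nifti_path):
    """Load a brain MRI volume and apply typical GAN-ready preprocessing:
    intensity clipping, z-score normalization, and rescaling to [0, 1]."""
    volume = nib.load(nifti_path).get_fdata()   # 3D array, e.g. (240, 240, 155)

    # Clip extreme intensities (1st-99th percentile here) to suppress outliers.
    lo, hi = np.percentile(volume, [1, 99])
    volume = np.clip(volume, lo, hi)

    # Z-score normalization computed over nonzero (brain) voxels only.
    brain = volume[volume > 0]
    volume = (volume - brain.mean()) / (brain.std() + 1e-8)

    # Rescale to [0, 1] so slices can be exported as PNG/JPEG for 2D GANs.
    volume = (volume - volume.min()) / (volume.max() - volume.min() + 1e-8)
    return volume.astype(np.float32)

# Hypothetical usage: vol = preprocess_volume("subject01_T1w.nii.gz")
```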
Table 10. Comparative study of evaluation metrics used in this SLR.
Evaluation Metric | FR/NR (Full-Reference/No-Reference) | Description | Assessment Method | Ref. No.
Average symmetric surface distance (ASSD) | FR | Measures the average of all Euclidean distances between two image volumes. | Segmented image | [53,77,168]
Blind/Reference-less Image Spatial Quality Evaluator (BRISQUE) | NR | Focuses on natural scene statistics (NSS) such as ringing, blur, and blocking; quantifies the loss of naturalness by locally normalizing the luminance coefficients. | Whole image | [33,58,86]
Dice Similarity Coefficient (DSC) | FR | Measures spatial overlap and provides a reproducibility validation score for image segmentation. | Segmented image | [70,77,79,83,96,99,144]
Frechet Inception Distance (FID) | FR | The Wasserstein-2 distance between Gaussian distributions fitted to features of synthetic and real images. | Whole image | [35,59,84,99,120]
Hausdorff Distance (HD95) | FR | HD measures the maximum Euclidean distance between all surface points of two image volumes (HD95 takes the 95th percentile). | Segmented image | [115]
Jaccard similarity coefficient (JSC) | FR | Compares the similarity and diversity of images; also known as Intersection over Union. | Segmented image | [98,99,159]
Maximum Mean Discrepancy (MMD) | FR | Measures the dissimilarity between the probability distribution of real images over the space of natural images and the parameterized distribution of the generated images. | Whole image | [32]
Mutual Information Distance (MID) | FR | Measures the association between corresponding synthetic images in different modalities: it evaluates the mutual information of synthetic image pairs and real image pairs and then computes their absolute difference. | Whole image | [84,99]
Normalized Mean Absolute Error (NMAE) | FR | Measures the estimation error of a specific color component between the original and synthetic images. | Whole image | [33,74,88]
Normalized Mutual Information (NMI) | NR | Expresses the amount of information synthetic images carry about the original image. | Whole image | [140,148]
Normalized Cross-Correlation (NCC) | FR | Evaluates the degree of similarity between the synthetic and original image signals; an elementary approach for matching two image patch positions. | Segmented image | [25]
Naturalness Image Quality Evaluator (NIQE) | NR | A distance-based measure of a natural image's divergence from statistical consistency; quantifies image quality according to the level of distortion. | Whole image | [33,41,58,88]
Peak Signal-to-Noise Ratio (PSNR) | FR | The ratio between the maximum possible power of the original image and the power of the reconstruction error in the generated image. | Whole image | [48,56,59,73,74,75,85,88]
Structural Similarity Index Measure (SSIM) | FR | Indicates the perceptual difference between original and synthetic images by comparing the visible structures in the image: luminance, contrast, and structure. | Whole image | [38,59,73,74,75,88]
Root-Mean-Square Error (RMSE) | FR | Measures the difference between the value predicted by an estimator and the actual value of a definite variable. | Whole image | [68,72,75,99,140]
Universal Quality Index (UQI) | FR | Models image distortion as the product of loss of correlation, luminance distortion, and contrast distortion. | Whole image | [14]
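As a rough illustration of how some of the full-reference metrics in Table 10 are computed in practice, the sketch below uses scikit-image for PSNR and SSIM and a small NumPy/SciPy helper for the Frechet distance underlying FID. The arrays are random stand-ins, and a real FID computation fits the Gaussians to Inception-v3 features rather than raw pixels.

```python
import numpy as np
from scipy.linalg import sqrtm
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def frechet_distance(feats_real, feats_fake):
    """Frechet distance between Gaussians fitted to two feature sets
    (rows = samples); FID applies this to Inception features."""
    mu1, mu2 = feats_real.mean(axis=0), feats_fake.mean(axis=0)
    sigma1 = np.cov(feats_real, rowvar=False)
    sigma2 = np.cov(feats_fake, rowvar=False)
    covmean = sqrtm(sigma1 @ sigma2)
    if np.iscomplexobj(covmean):   # numerical noise can leave tiny imaginary parts
        covmean = covmean.real
    diff = mu1 - mu2
    return diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean)

rng = np.random.default_rng(0)
real = rng.random((64, 64))        # stand-in for a ground truth slice in [0, 1]
fake = np.clip(real + 0.05 * rng.standard_normal((64, 64)), 0, 1)

print("PSNR:", peak_signal_noise_ratio(real, fake, data_range=1.0))
print("SSIM:", structural_similarity(real, fake, data_range=1.0))
print("Frechet:", frechet_distance(rng.random((100, 16)), rng.random((100, 16))))
```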