Systematic Review

A Review on the Applications of GANs for 3D Medical Image Analysis

School of Computing Technologies, Royal Melbourne Institute of Technology University (RMIT), Melbourne, VIC 3000, Australia
* Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(20), 11219; https://doi.org/10.3390/app152011219
Submission received: 21 August 2025 / Revised: 29 September 2025 / Accepted: 30 September 2025 / Published: 20 October 2025

Abstract

Three-dimensional medical images, such as those obtained from MRI scans, offer a comprehensive view that aids in understanding complex shapes and abnormalities better than 2D images, such as X-ray, mammogram, ultrasound, and 2D CT slices. However, MRI machines are often inaccessible in certain regions due to their high cost, space and infrastructure requirements, a lack of skilled technicians, and safety concerns regarding metal implants. A viable alternative is generating 3D images from 2D scans, which can enhance medical analysis and diagnosis and also offer earlier detection of tumors and other abnormalities. This systematic review focuses on Generative Adversarial Networks (GANs) for 3D medical image analysis over the last three years, owing to their dominant role in 3D medical imaging and the flexibility and adaptability they offer for volumetric medical data compared to other generative models. GANs offer a promising solution by generating high-quality synthetic medical images, even with limited data, improving disease detection and classification. The existing surveys do not offer an up-to-date overview of the use of GANs in 3D medical imaging. This systematic review therefore focuses on advancements in GAN technology for 3D medical imaging, analyzing studies from the recent years 2022–2025 and exploring applications, datasets, methods, algorithms, challenges, and outcomes. Particular attention is given to the modern GAN architectures, datasets, and code that can be used for 3D medical imaging tasks, so that readers looking to use GANs in their research can draw on this review when designing their studies. Following PRISMA standards, five scientific databases were searched: IEEE, Scopus, PubMed, Google Scholar, and Science Direct. A total of 1530 papers were retrieved on the basis of the inclusion criteria. The exclusion criteria were then applied, and after screening titles, abstracts, and full texts, 56 papers were extracted and carefully studied. An overview of the various datasets used in 3D medical imaging is also presented. This paper concludes with a discussion of possible future work in this area.

1. Introduction

The significant increase in medical imaging procedures and advancements in computational capabilities have driven the rapid evolution of artificial intelligence (AI) in medical imaging. AI plays a critical role across all types of medical imaging, regardless of the imaging technique or the organs being examined. AI has the potential to be integrated into every stage of diagnostic imaging, including data acquisition, reconstruction, analysis, and reporting [1]. It significantly influences all aspects of the daily workflow of radiologists, who are increasingly burdened by rising workloads, inevitably leading to errors due to intense pressure [2]. The advancement of AI in this context is motivated by a desire to improve the effectiveness and efficiency of clinical practices. AI has emerged as a robust tool in image analysis, increasingly adopted by radiologists for detecting diseases early, minimizing diagnostic inaccuracies in preventive healthcare, and supporting the decision-making process [3,4]. By accurately detecting and segmenting tumors and other abnormal formations in organs, the early detection and precise diagnosis of diseases can be achieved, additionally supporting the development of personalized treatment plans for patients [5,6]. Computer-aided diagnosis for medical image analysis includes several key tasks: segmentation (separating the relevant parts of the image from the background), detection (locating and counting specific areas of interest), denoising (removing irrelevant or noisy pixels), reconstruction (converting lower-dimensional data like 2D images into higher-dimensional formats like 3D), and classification (assigning labels to images based on their content). Each of these tasks plays an important role and poses significant challenges in developing automated systems for medical imaging diagnostics [7,8].
A 3D image consists of an image along with its volume component. Viewing the entire volume at once gives a broad perspective, which helps in better understanding shapes and abnormalities that may be unfamiliar, compared to viewing 2D images only [9]. Working with 3D volumetric medical data is especially challenging because of the complexity [10,11] caused by variations in anatomy, pathology, and imaging techniques. Compared to 2D medical images, 3D images are larger and more complex, making them harder to analyze. As a result, research focused on 3D volumetric data has become increasingly popular among researchers [12].
MRI-obtained images are mostly referred to as 3D or volumetric images. However, MRI machines are not accessible in some regions due to a number of reasons, a few of which are listed below:
  • Their high cost, which involves the price of equipment, installation costs, and maintenance or operation costs [13].
  • Lack of significant infrastructure and space, as MRI machines require a large amount of physical space to be installed [14].
  • Limited access in underfunded or rural regions that cannot afford the cost of MRI machines. Another issue is the unavailability of skilled technicians to operate these machines [14].
  • Performing an MRI can be dangerous if there are ferromagnetic objects nearby or if a patient has metal implants [15].
  • Unavailability of large amounts of datasets due to privacy concerns [16,17].
  • Exposure to a high volume of harmful radiation [18].
In the above-mentioned scenarios, since the cost of directly producing multimodal images from medical devices is high [19], a practical solution is the generation of 3D images from 2D images, as this will improve analysis as well as diagnosis in healthcare settings. Ref. [20] suggests that reconstructing 3D images from X-rays is an ideal method for visualizing a patient’s anatomy for clinical analysis. This approach is cost-effective, widely accessible, and minimizes the patient’s exposure to radiation compared to other imaging techniques. The solution lies in the use of generative AI models; these are machine learning models that create new data based on patterns and structures they learn from existing/training data. These generative models have enriched existing medical datasets in recent years by producing realistic synthetic data [21]. Generative AI has had a significant impact across various fields, including image generation, text creation, music composition, drug discovery, and healthcare [22], and is also very effective in the early diagnosis of underlying conditions [23]. The most popular among these models are GANs and Transformers. To visualize the hierarchy and relationship between AI, Machine Learning (ML), Deep Learning (DL), and Generative AI, a Venn diagram is presented in Figure 1. AI represents the broad field of creating intelligent systems, while ML is a subset of AI that enables machines to learn from data. Within ML, DL is a more specific approach involving neural networks with multiple layers. Generative AI falls under DL and ML, focusing on models that can generate new data resembling the training data. In recent years, Generative AI has received significant attention due to the huge amounts of data and the increasing sophistication of computing technologies [24,25]. Figure 1 provides a comprehensive overview with a conceptual hierarchy for readers, giving a clearer understanding of where Generative AI stands in the overall landscape. It establishes a logical progression towards this paper’s main focus: the use of Generative AI in 3D medical imaging.
While Generative AI comprises different models, including diffusion models and transformers, this systematic review will focus on the role of GANs in 3D medical imaging because of various reasons, some of which include the following:
  • The extensive use, applications, and proven effectiveness of GANs in 3D medical imaging for data generation, super-resolution, denoising, cross-modality translation, segmentation, and reconstruction [26,27,28];
  • GANs offer flexible architecture for handling 3D volumes, including multi-resolution generators and discriminators [29];
  • GANs offer faster computing speeds, the potential for real-time synthesis when applied in clinical settings [30], and greater efficiency compared to other generative AI models, e.g., transformers and diffusion models [31,32].
The GAN was introduced by Goodfellow and his colleagues [33] in 2014, and since then, it has evolved and transformed medical imaging by generating high-quality images, even when datasets are limited, leading to improved diagnostic accuracy and enhanced image quality. GANs stand out for their ability to learn patterns in data in order to generate images or to translate images to other modalities [34,35,36,37]. GANs are used for the generation of 3D medical images, covering key techniques like anomaly detection, complex data synthesis, denoising, reconstruction, segmentation, classification, and image translation [38,39,40,41,42,43]. Recently, GANs have been used in 3D generation techniques in medical imaging, such as image reconstruction, synthesis, and high-precision tumor detection, to name a few [44].
While the existing surveys [45,46,47,48,49,50,51] explore the role of GANs in 3D medical imaging, they do not offer an up-to-date overview of the advancements of GANs in 3D medical imaging, as they explore the use of GANs for 3D medical imaging only up to the year 2022. A recent survey [52] covers the latest research but only focuses on image enhancement. Therefore, this review offers the first comprehensive synthesis that is exclusively focused on 3D medical image analysis, specifically focusing on the recent research articles from the years 2022–2025 to provide an up-to-date review of the recent advancements in Generative AI in 3D medical imaging. The distinction is critical as 3D medical imaging analysis introduces unique computational, architectural, and clinical challenges. By systematically categorizing 3D GAN models, focusing on the different applications of GANs, the datasets that are commonly used, any pre-processing methods that are applied, the algorithms, their challenges and the findings, evaluating the model’s performance across volumetric tasks, and identifying limitations and challenges, this review fills a crucial gap in the research, making this paper a valuable addition to the existing body of knowledge. This review provides a roadmap for future research and clinical implementation in this rapidly evolving field, helping to push forward the cutting-edge advancements in 3D medical imaging.

1.1. Generative Models for Synthetic Data Generation—Generative Adversarial Networks (GANs)

In medical image analysis, DL models are applied across various tasks such as registration, detection, classification, image-to-image translation, segmentation, and video-based applications. In this context, AI involves the application of artificial neural networks, specifically deep learning techniques like GANs, which have significant implications for radiology. GANs consist of two neural networks: a generator that creates synthetic images resembling real ones and a discriminator that distinguishes between synthetic and real images [46,53,54]. In the context of radiology, the generator model can replicate images consistent with its training data and generate new images with similar features. Meanwhile, the discriminator is trained to classify images. By choosing an appropriate loss function and through repeated training, the generated images become more realistic and closer in distribution to real images. The working of a GAN is illustrated in Figure 2. GANs are widely used for generating synthetic images. The latest advancements in GANs involve a style-based generation approach, where style vectors from a mapping network control the image generation process. Researchers have been drawn to GANs because of their impressive ability to generate images, which has led to their widespread adoption in medical image augmentation.
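To make the adversarial training loop described above concrete, the following is a minimal, illustrative PyTorch sketch of one training step. The toy fully connected networks, the 64 × 64 image size, and the hyperparameters are assumptions for illustration only and do not correspond to any specific architecture from the reviewed studies.

```python
# Minimal GAN training step (illustrative only; toy networks and sizes).
import torch
import torch.nn as nn

latent_dim = 100

# Toy generator: maps a noise vector to a flattened 64x64 "image".
generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, 64 * 64), nn.Tanh(),
)

# Toy discriminator: outputs the probability that its input is real.
discriminator = nn.Sequential(
    nn.Linear(64 * 64, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(real_batch):
    n = real_batch.size(0)
    real_labels, fake_labels = torch.ones(n, 1), torch.zeros(n, 1)

    # 1) Discriminator update: push real images towards 1 and fakes towards 0.
    fake = generator(torch.randn(n, latent_dim)).detach()
    loss_d = bce(discriminator(real_batch), real_labels) + \
             bce(discriminator(fake), fake_labels)
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # 2) Generator update: try to make the discriminator label fakes as real.
    loss_g = bce(discriminator(generator(torch.randn(n, latent_dim))), real_labels)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()
```

Repeating this step drives the two networks against each other, which is the mechanism that progressively brings the generated images closer in distribution to the real ones.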
GANs offer a promising solution for creating synthetic medical images, which helps address the problems of having limited labeled data and imbalanced class distributions. This, in turn, boosts the effectiveness of disease classification and detection models. By leveraging GAN technology, researchers strive to enhance diagnostic processes, treatment planning, and overall patient care through improved medical imaging techniques. However, current methods face challenges like interpretability issues, limited data, overfitting, unstable training, domain adaptation, and ethical concerns. Researchers are thus exploring new GAN architectures and methods to address these issues and improve the quality and reliability of generated medical images.
GANs are used for image-to-image translation, enabling the conversion of medical images from one modality to another, such as transforming CT scans into MRI images or creating synthetic X-ray images from CT data. This is particularly useful when certain imaging modalities are expensive or involve ionizing radiation. GANs also improve image quality by denoising medical images, aiding in accurate analysis. Furthermore, they help in anomaly detection by differentiating between normal and abnormal cases, which supports early disease detection. GANs have revolutionized image enhancement and diagnostic accuracy by producing high-quality images from limited datasets [28]. GANs play a vital role in tackling imbalanced datasets by generating synthetic samples of rare medical conditions, thereby creating more balanced training sets. Innovative GAN technologies can greatly improve image quality, broaden applications, and evolve the GAN framework. Image generation provides promising solutions for improving image quality, translating images into other modalities, and modeling the progression of diseases [56].
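As a sketch of the unpaired cross-modality translation idea mentioned above (e.g., CycleGAN-style CT-to-MRI conversion), the cycle-consistency term below penalizes the difference between an input volume and its reconstruction after a round trip through both generators. The generator functions `G_ct2mr` and `G_mr2ct` and the loss weight are hypothetical placeholders, not components of a specific reviewed model.

```python
# Illustrative cycle-consistency loss for unpaired CT <-> MRI translation.
import torch.nn as nn

l1 = nn.L1Loss()

def cycle_consistency_loss(real_ct, real_mr, G_ct2mr, G_mr2ct, weight=10.0):
    # CT -> synthetic MRI -> reconstructed CT should return to the original CT.
    rec_ct = G_mr2ct(G_ct2mr(real_ct))
    # MRI -> synthetic CT -> reconstructed MRI should return to the original MRI.
    rec_mr = G_ct2mr(G_mr2ct(real_mr))
    return weight * (l1(rec_ct, real_ct) + l1(rec_mr, real_mr))
```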

1.1.1. GANs for 2D Medical Imaging

This paper focuses on surveying the use of GANs for 3D medical images; however, background is provided on 2D medical imaging, since a great deal of research and advancement has been carried out in that area, and numerous 2D GAN models lay a foundation for 3D medical imaging GAN models, both architecturally and conceptually. GANs generate realistic synthetic images, which is especially helpful in expanding small labeled datasets, enhancing image quality by creating higher-resolution versions of lower-quality images, improving diagnostic accuracy, and enabling cross-modal image conversions [57,58]. Overall, the realistic and varied images produced by GANs are helping to improve image analysis, segmentation, and clinical decision-making in healthcare [59,60]. GAN models for 2D medical images have become a stepping stone for GANs that deal with 3D medical images, since the demand for 3D medical images is increasing: they offer a clear three-dimensional overview of the whole volume to better understand unfamiliar shapes or abnormalities, leading to improved disease diagnosis as well as better patient monitoring and treatment planning [51,61].

1.1.2. GANs for 3D Medical Imaging

While 2D images are useful for many purposes, 3D images offer greater insight into the shape and structure of tumors. Understanding the 2D and 3D geometry of a tumor is crucial for assessing its growth patterns, which can aid in improving surgical planning and drug delivery strategies. GAN-based methods often face challenges such as memory limitations and stability problems. As a result, most GAN models have been trained on low-resolution 3D images [62,63,64]. It is only recently that they have been able to generate full-resolution images by learning smaller parts, or sub-volumes, of the image [65]. Three-dimensional generative models require an extremely long training time due to the large number of parameters, features, and model complexity. The segmentation and visualization of 3D medical images allows for a better understanding of the condition and helps with better treatment planning [5].
The progress in 3D medical image research is slow, mainly because of the lack of large-scale 3D medical image datasets [66]. This scarcity is due to the complexity involved in data collection, the need for expert annotation, privacy issues, and obtaining patient consent. To address this, GANs have been widely used to generate synthetic images that replicate real medical data. However, most GAN-based methods focus on 2D image generation. When it comes to 3D medical imaging, there are three major challenges:
  • Insufficient availability of 3D medical images to train effective models, due to high annotation costs, patient consent issues, and the difficulty of expert annotations, making it hard to train 3D medical models effectively [67];
  • The use of 3D convolutional layers introduces a large number of parameters, slowing down the training process and increasing the risk of overfitting because the number of parameters is disproportionately large compared to the small dataset size [68] (a brief parameter-count illustration follows this list);
  • Three-dimensional modeling for medical imaging is computationally intensive, as it requires long training hours and significant memory and hardware due to the complexity of generative architectures and volumetric data [68].
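A quick parameter count makes the second point above concrete; the channel sizes below are arbitrary, and the roughly threefold growth comes purely from the extra kernel dimension.

```python
# Illustrative comparison of 2D vs. 3D convolution parameter counts.
import torch.nn as nn

conv2d = nn.Conv2d(64, 128, kernel_size=3)   # 64*128*3*3   + 128 = 73,856 params
conv3d = nn.Conv3d(64, 128, kernel_size=3)   # 64*128*3*3*3 + 128 = 221,312 params

params2d = sum(p.numel() for p in conv2d.parameters())
params3d = sum(p.numel() for p in conv3d.parameters())
print(params2d, params3d, params3d / params2d)   # ratio is roughly 3x per layer
```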

1.2. Systematic Review Objectives

In this review, intensive research has been carried out on the existing literature. Past reviews give valuable insights into the 3D medical imaging sector up until the year 2022 [45,46,47,48,49,50,51]. A recent survey [52] covers the latest research works, but its main focus is on image enhancement only, overlooking the broader coverage of GANs across multiple image processing tasks like 3D image generation, segmentation, reconstruction, and clinical translation. A research gap therefore lies in exploring the works published from 2022 onwards on 3D medical imaging that cover generation, segmentation, enhancement, translation, and reconstruction. It is important to study new research from after the year 2022 for a number of reasons:
  • The field relating to GANs in 3D medical imaging has exponentially evolved since 2022, introducing some new models, increased clinical applicability, and new tasks;
  • Earlier surveys may be outdated now, since there has been significant advancement in this field in recent years;
  • Some breakthrough 3D GAN models that efficiently enhance the image fidelity have also been introduced since 2022, such as multi-resolution GANs [27], memory-efficient CRF-Guided GANs [69], and hybrid frameworks like HA-GAN [70], to name a few.
Therefore, this systematic review focuses on the existing research (published between 2022 and 2025) that covers the multiple medical image processing tasks, i.e., generation, segmentation, reconstruction, translation, and enhancement. The older studies are consciously excluded, and only the recent studies are included to avoid any sort of redundancy. The innovations in GAN architectures, their evaluation strategies, and clinical applicability are highlighted, offering a comprehensive and comparative analysis. By focusing on the latest advancements in GANs and their multiple imaging application tasks, this systematic review provides a comprehensive resource for researchers and clinicians who are navigating this fast-evolving field.
To facilitate the readers’ and researchers’ understanding, the following questions form the primary focus of this work:
  • What are the applications of GANs for generating 3D medical images?
  • Which methods are most common and most effective for this purpose?
  • What datasets have been used for each work?
  • What evaluation metrics have been used for comparisons and results?
  • Are there any pre-processing techniques used?
  • What are the accuracy and the limitations of each method?
  • What could be the possible future work to pursue with this technology?
While this review focuses exclusively on the use of GANs for 3D medical image analysis, we acknowledge the impact of diffusion models in 3D medical imaging. We encourage the researchers and readers seeking broader coverage to consult emerging reviews dedicated to diffusion model methods.

2. Survey Methodology

In this review, a thorough and comprehensive search is carried out following the PRISMA guidelines [71], which are established standards for conducting and reporting systematic reviews and meta-analyses.

2.1. Databases and Search Strategy

A comprehensive literature search was carried out across these scientific databases: IEEE, Science Direct, Google Scholar, Scopus, and PubMed. All the relevant studies that were published between 2022 and 2025 were collected by using targeted search queries. The sorting was based on relevance and citation count, where feasible. The search tags included “Generative AI”, “3D Medical Imaging”, “Three-Dimensional Imaging”, and “GANs”. The database and search strategy are summarized in Table 1.
This approach and these databases are sufficient for achieving the aims of this paper for the following reasons:
  • Limiting the search years to 2022–2025 ensures that the latest developments in this field are captured. This time window aligns with the emergence of benchmark datasets like CT-ORG, UDPET, VerSe, and GLIS-RT, and also the use of hybrid GAN architectures.
  • The targeted tags, “Generative AI”, “3D Medical Imaging”, “Three-Dimensional Imaging”, and “GANs”, ensure that the retrieved studies align with the concept of this systematic review paper.
  • The databases used are IEEE, Science Direct, Google Scholar, Scopus, and PubMed. These provide influential, relevant, and cutting-edge research papers, thus offering a comprehensive view of advancements in the generative AI field.
  • Highly relevant studies are prioritized, which helps to ensure that impactful research is selected.
Anomaly Detection Screening: While anomaly detection is an advancing area, targeted queries combining “anomaly detection”, “3D GANs”, and “3D medical imaging” for the years 2022–2025 returned some publications; however, none combined both GANs and 3D medical imaging for anomaly detection. Therefore, no 3D GAN anomaly detection studies are included.
For a detailed analysis of the retrieved studies regarding the Database and Search Strategy, date of search, search tags, and hit count, refer to Supplementary Table S1.

2.2. Eligibility Criteria

This section discusses the inclusion criteria (Section 2.2.1), exclusion criteria (Section 2.2.2), and risk of bias (Section 2.3) for the publications.

2.2.1. Inclusion Criteria

The selected studies were based on the chosen topic and research questions, meeting the following key requirements:
  • Involve the use of Generative AI: GANs.
  • Focused on 3D or volumetric medical imaging.
  • Provide enough information to answer at least one of the research questions, which are listed in Section 1.2.
  • The publications chosen were mainly conference papers and journal articles to ensure methodological rigor and peer-reviewed quality.
  • To ensure the most current trends and methods, the studies chosen were published between 2022 and 2025.
  • Only the studies published in the English language were selected.
  • Only the studies with full-text accessibility were included.
The studies that explicitly implemented 3D volumetric GANs were included. Meanwhile, 2.5D, patch-wise, or stacked-2D approaches were excluded unless they performed full-volume synthesis or evaluation. To highlight true 3D innovation, borderline cases, e.g., pseudo-3D stacking, were excluded.

2.2.2. Exclusion Criteria

Studies with the following characteristics were not selected:
  • Studies focusing on 2D medical images only;
  • Duplicate entries;
  • Extended abstracts;
  • Studies in languages other than English;
  • Articles that were not relevant to this study, where relevance is determined by reference to the medical imaging domain, such as image generation, segmentation, reconstruction, or transformation to other modalities;
  • Studies not using GAN models for processing;
  • Publications that do not work on 3D or 3D/2D medical imaging;
  • Articles related to non-medical study;
  • Studies published before 2022;
  • Studies that were reviews.
Duplicates were removed from the retrieved studies, titles and abstracts were screened, and full-text screening followed to assess each study’s eligibility for inclusion in this systematic review. Any disagreements were resolved through discussion between the authors. The final selection of the retrieved studies resulted in a total of 56 research papers, as illustrated in the PRISMA flow diagram in Figure 3.

2.3. Risk of Bias

The risk of bias assessment was conducted for this systematic review using the Risk of Bias in Systematic Reviews (ROBIS) tool [72], focusing on the four main domains: (1) eligibility criteria of the studies, (2) identification and selection of studies, (3) data extraction and outcome evaluation, and (4) results and interpretation of findings. Each domain was rated with ROBIS response options (Yes, Probably Yes, Probably No, No, No Information), allowing for the final classification of risk of bias (high, low, or unclear). If the outcome for all domains is Yes or Probably Yes, the overall risk of bias of the review is taken as Low; if any domain is marked No or Probably No, the overall risk of bias is set as High; and only if there is insufficient information for judgement is the risk of bias of the review marked as Unclear [72]. The outcome of the ROBIS risk of bias assessment is summarized in Table 2.
For this systematic review, the overall risk of bias is assessed as ’low’ because a transparent methodology was used for the selection of literature, data extraction, and the interpretation of results. This review clearly abides by the inclusion/exclusion criteria, with publications extracted from reputable databases and limited to recent studies from the years 2022–2025, which reduces selection and reporting bias. Some individual papers were marked as high or unclear risk due to limitations: the unclear papers had insufficient detail on evaluation metrics, the characteristics of the model used, or the dataset used, while the high-risk papers used private datasets or had no quantitative comparisons.

3. Results of Survey

The database search returned a total of 1530 papers, as illustrated in Figure 3. Considering the eligibility criteria from Section 2.2, the duplicates were removed, followed by title screening, abstract screening, and full-text screening. A total of 56 publications were retrieved for detailed review regarding the use of generative AI in 3D medical imaging, which are presented in Section 3.4, after careful review.
Based on the collected publications, this review has been organized to elaborate these research works from several perspectives, including medical image modality (Section 3.1), medical applications (Section 3.2), 2D medical images (Section 3.3), 3D medical images (Section 3.4), public datasets (Section 3.5), code availability (Section 3.6), and evaluation metrics (Section 3.7). Table 3 is provided to help readers and researchers see how each section contributes to the aims of this survey and to identify what is most relevant to them.

3.1. Medical Image Modality

In this section, the modality of medical images used in the reviewed publications is summarized and presented in Figure 4. The majority of the reviewed studies used MRI and CT; a potential reason for this is the greater number of publicly available datasets. The various medical image modalities used are discussed below.
  • MRI: Among all the reviewed papers, the most popular image modality is magnetic resonance imaging, or MRI, which covered 42% of the publications. MRI utilizes strong magnetic fields and magnetic field gradients along with radio waves to capture images of organs [126]. It is a non-invasive and radiation-free imaging technique, providing promising results [127,128].
  • CT: In total, 42% of the reviewed publications used the computed tomography (CT) image modality, which is equally as popular as MRI. CT scanning involves radiation exposure, using X-rays to generate high-resolution cross-sectional images of the body [129,130].
  • PET: Positron emission tomography (PET) was used in 5% of the reviewed papers.
  • X-ray: About 3% of the reviewed papers focused on the use of X-ray.
  • TOF MRA: TOF MRA was used in 3% of the reviewed publications.
  • Ultrasound: In total, 3% of the reviewed papers worked with ultrasound images.
  • Echocardiography: Echocardiography was used in 2% of the reviewed papers.
Most of the research work focuses on MRI and CT scans since they help produce high-resolution 3D volumes, with MRI offering rich soft-tissue contrast [131] and CT providing vivid information regarding bone and density [132]. Their dominance in the reviewed literature is because publicly accessible, standardized datasets (e.g., BraTS, LIDC, UK Biobank) are readily available for these modalities, making them suitable for training GANs [133]. MRI and CT scans are predominantly used for diagnosis and treatment planning in oncology, neurology, and cardiology, which are areas of strong research interest [134]. This could potentially lead to the other modalities lagging behind in AI advancements.

3.2. Medical Applications

In this section, we have summarized the medical applications that have been focused on in the reviewed papers. This distribution is illustrated in Figure 5.
Three-dimensional image generation: About 58% of the reviewed publications performed experiments only on the generation of three-dimensional medical images. One of the major challenges in applying deep learning in medical research is the limited availability of data [135,136]. GANs can generate entirely new data that does not correspond to any actual person; this helps in avoiding privacy and anonymity concerns in clinical applications. Traditional data augmentation methods, such as the translation, rotation, scaling, and flipping of existing samples, are commonly used to create synthetic data, but they face limitations in the medical field because medical images cannot be significantly altered in shape or color without affecting anatomical features, which can lead to misdiagnosis and inaccurate treatment planning. GANs overcome this by producing entirely new scans that closely resemble real patient images, allowing datasets to be expanded without distorting the original medical data [137,138].
Three-dimensional image segmentation: The second most common task performed in these reviewed papers was the segmentation of three-dimensional medical images, accounting for 29%. Image segmentation helps in delineating the pathological regions, actioning image-guided interventions, and surgical planning [139].
Three-dimensional image reconstruction and segmentation: In total, 9% of the reviewed publications focused on both medical image reconstruction and segmentation.
Three-dimensional image transformation: Three-dimensional image transformation was performed in 2% of the reviewed publications. GANs can convert images from one modality to another, such as transforming CT images into MR or PET images, a process known as cross-modality synthesis. They can also generate new images within the same modality, like converting MRI images from T1-weighted sequences to T2-weighted ones, a form of image-to-image translation, which helps in significantly reducing acquisition times, lowering radiation exposure, and preventing patients from undergoing multiple scans [140]. Another important process is denoising, where GAN architectures, such as conditional GANs, CycleGANs, and super-resolution GANs (SRGANs), have been successfully applied to reduce noise in low-dose CT (LDCT) scans [141]. Low- to high-dose conversion, where GANs are employed to enable safer imaging protocols to generate enhanced image reconstructions that retain structural integrity, has supported clinical decision-making without compromising patient safety [142,143].
Three-dimensional image enhancement: This involves tasks like image quality improvement, super-resolution, and artifact correction.
Table 4 shows the arrangement of these reviewed publications regarding the medical applications performed.

3.3. Literature Survey Regarding 2D Medical Imaging

This review paper focuses on GANs in 3D medical imaging; however, it is very important to provide a brief insight into the 2D GAN models, to establish a technical and conceptual background, and help better understand the recent advancements of GANs in 3D medical imaging.
  • Much research has been carried out on the use of GANs in 2D medical imaging, and a few state-of-the-art research studies are the foundational influence from which the later research on 3D imaging has been derived. For example, the 2D models CycleGAN, pix2pix, and StyleGAN were extended and adapted to 3D GAN architectures. These 2D imaging models were adapted with a few modifications and the addition of a third component/dimension and were then used to generate volumetric images.
  • The evolution of 3D GANs can be contextualized by understanding these 2D architectural roots.
  • Two-dimensional GANs offer insights into the data augmentation, training stability, and also evaluation metrics that are helpful for three-dimensional implementation.
Figure 6 represents a taxonomy of the use of GANs in 2D medical imaging, explaining the applications, modalities, basic structure, and training strategies.
Alqushaibi et al., 2024 [144] use Pix2PixGAN augmented with an Attention U-Net generator and a PatchGAN discriminator, guided by a sine cosine algorithm (SCA) for enhancing the selection of optimal hyperparameters in GANs for the synthesis and segmentation of medical images, outperforming baseline methods in terms of Dice, IoU, similarity index, and MAE.
Abdollahi et al., 2023 [145] use GAN and incorporate vision transformers (ViT). The model is trained end-to-end in two stages: super-resolution (to reconstruct high-resolution image from low-resolution input) and realistic modality translation (to map images between different domains). The model provides a framework for medical image enhancement with perceptually realistic and detailed outputs, but lacks comparison with other methods.
Liu et al., 2023 [146] use MVI-Wise GAN, which trains on paired liver CT-MR images and can then generate MRIs from CT images, eliminating the requirement for liver MRIs that use contrast agent (CA) injections. The performance results show better FID and KID scores than baseline GAN models, with a tumor detection accuracy of 92.3% in the synthetic images. This could be extended to 3D medical imaging by adding a volume component.
Tripathi et al., 2023 [147] use U-Net segmentation to extract a segmentation mask, followed by DCGAN to generate synthetic images (using both this extracted mask and noise as input). The synthetic mammograms closely resemble real mammograms in terms of visual appearance and relevant features, thus claiming to provide the research community with diverse and realistic synthetic mammograms, transcending data scarcity.
Ju et al., 2024 [70] lay the foundation of a hybrid augmented generative adversarial network (HAGAN), which contains three modules: Attention Mixed (AttnMix) Generator, Hierarchical Discriminator, and Reverse Skip Connection between Discriminator and Generator. HAGAN demonstrates the best FID score compared to six other baseline methods.
Yu et al., 2023 [148] use a Fuzzy Self-Guided Structure Retention GAN (FS-GAN), which includes a Self-Guided Structure Retention Module (SSRM) that enables the generator to learn features better and retain the correct neural fiber structure. Additionally, an Illumination Distribution Correction Module (IDCM) regulates the illumination distribution of the enhanced image, making it more consistent with human perception. The comparison with the traditional methods CLAHE and DCP, and six DL methods, NST, MSG-Net, EnlightenGAN, CycleGAN, StillGAN, and SynDiff, shows that FS-GAN achieves the best results in AvG, Brisque, and PIQE.
Onakpojeruo et al., 2024 [149] use DCGAN to synthesize data, then grid-search optimization strategy is used for a Conditional Deep Convolutional Neural Network (C-DCNN) model for a classification task to detect tumors and differentiate brain tumors from kidney tumors. A thorough comparison of the performance of the novel C-DCNN model with SOTA models is carried out, showing that the proposed model achieved accuracy, precision, recall, and F1-scores of 99% on both synthetic and augmented images, outperforming the comparative models. The success of synthetic data in improving classification performance suggests potential applications in other medical imaging tasks, potentially revolutionizing disease prediction and diagnosis.
Alauthman et al., 2024 [150] use Boruta feature selection followed by LS-GAN for augmentation, with the primary goal of generating synthetic data that closely resembles the real data. This GAN-based augmentation improved the stability and generalization of the classifiers for classification applications, reducing overfitting issues commonly associated with small datasets.
Sravani et al., 2024 [151] use a GAN with an Adam Optimizer, followed by post-processing, where a Gaussian kernel is applied to smooth the generated images for improved resolution (ESRGAN). The real and generated images are compared using SSIM score, resulting in a promising solution to generate synthetic medical data instead of preparing original medical data, particularly for brain tumor diagnosis.
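Several of the reviewed studies, including the one above, report SSIM between real and generated images. The snippet below is a minimal illustration of such a comparison using scikit-image; the random arrays are placeholders standing in for actual slices.

```python
# Illustrative SSIM comparison between a real and a generated slice.
import numpy as np
from skimage.metrics import structural_similarity as ssim

real_slice = np.random.rand(256, 256).astype(np.float32)       # placeholder data
generated_slice = np.random.rand(256, 256).astype(np.float32)  # placeholder data

score = ssim(real_slice, generated_slice, data_range=1.0)
print(f"SSIM: {score:.3f}")   # 1.0 would indicate structurally identical images
```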

3.4. Literature Survey Regarding 3D Medical Imaging

A summary of the distribution of the total publications in accordance with the years they were published in and the medical imaging applications they perform is outlined in Figure 7. The distribution of these publications according to the datasets used is represented in Figure 8, and their distribution according to the image modalities used to perform the imaging applications is represented in Figure 9.
3D Image Generation: The publications featuring 3D medical image generation are explained. Kim et al. [73] propose a 3D data-guided generative adversarial network (3D-DGGAN) where features (reference codes) are extracted from CT/MR images, which are then validated through a decoding process to ensure their accuracy. These reference codes are combined with Gaussian noise and passed through the generator to create 3D images. The discriminator is divided into 3 components: volume, slab, and slice. The volume discriminator analyzes the entire set of 3D image slices, the slab discriminator focuses on consecutive slices, and the slice discriminator evaluates individual randomly selected slices. This combination of discriminators allows for detailed evaluation of both specific slices and the continuity between adjacent slices. The volume discriminator enhances the ability to capture fine details in 3D images, leading to higher fidelity in the generated images. However, no analysis for memory usage is provided.
Sun et al. [74] claim that previous works that used 3D GANs generated low-resolution images (128 × 128 × 128 or smaller) due to memory constraints during training, so the authors propose a Hierarchical Amortized GAN (HA-GAN), where different configurations are used for training and inference. During training, HA-GAN generates both a low-resolution image and a randomly selected sub-volume of a high-resolution image. An encoder is used to extract features from images and stabilize training, preventing mode collapse. This sub-volume approach reduces memory requirements while preserving fine details in the high-resolution image. The low-resolution image ensures that the overall anatomical structure is consistent. During inference, the full high-resolution image can be generated without the need for sub-volume selection. The addition of a low-resolution branch helps the model learn the global structure, while the encoder improves performance. HA-GAN produces sharper images compared to other baseline methods, particularly at a higher resolution of 256³. Some limitations remain: this architecture could be expanded to other imaging modalities, and removing blank axial slices of training images could reduce the gap between the generated and real images.
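The memory-saving idea described for HA-GAN, training on a randomly selected high-resolution sub-volume alongside a downsampled whole volume, can be sketched as follows. The function name, crop size, and low-resolution target are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sub-volume/low-resolution sampling for memory-efficient 3D GAN training.
import torch
import torch.nn.functional as F

def sample_training_pair(volume, sub_depth=32, low_res=64):
    """volume: (1, 1, D, H, W) high-resolution scan, e.g. 256^3."""
    d = volume.shape[2]
    start = torch.randint(0, d - sub_depth + 1, (1,)).item()
    sub_volume = volume[:, :, start:start + sub_depth]       # high-res axial crop
    low_volume = F.interpolate(volume, size=(low_res,) * 3,
                               mode="trilinear", align_corners=False)
    return sub_volume, low_volume, start   # start index can condition the generator
```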
Hwang et al. [76] use a GAN incorporating CutMix and GRAF. CutMix involves cutting and mixing knee MRI image patches to create diverse training samples. Two generators and two discriminators were used for translating images between X-ray and MRI. Neural Radiance Fields (NeRF), known for generating high-fidelity 3D images, model the radiance field and depth of a scene to capture fine details in 3D space. GRAF, a hybrid of NeRF and GANs, is used to generate 3D MRIs with enhanced realism by combining volumetric scene representation with GAN-based refinement. The use of cycle consistency loss helps minimize information loss during translation; CutMix enhances the model’s ability to differentiate between the knee and background. However, this study could be extended to images of other organs.
Liu et al. [67] introduce a 3D Split-and-Shuffle-GAN, which incorporates StyleGAN, designed to efficiently generate high-quality 3D medical images. The training strategy uses the available 2D image slices to train a 2D GAN model; the 2D weights are then inflated to initialize the 3D GAN model to generate detailed 3D images. Channel Split-and-Shuffle modules are also introduced, which reduce the number of parameters in both the generator and discriminator networks while maintaining performance. These modules help avoid overfitting, especially given limited 3D medical data. Experiments with five different weight-inflation strategies and network designs on both the heart (COCA) and brain (ADNI) datasets show that this model outperforms other baseline methods significantly on FID (across axial, sagittal, and coronal planes), PSNR, and MS-SSIM, and t-SNE shows a distribution similar to real images, confirming that this method produces diverse, high-quality 3D medical images. A possible direction for future work is the exploration of network weight initialization strategies beyond inflation.
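One way to read the weight-inflation strategy described above is sketched below: a pretrained 2D kernel is replicated along the new depth axis and rescaled before being copied into the corresponding 3D layer. This is a generic illustration of the inflation idea under stated assumptions, not the exact scheme evaluated in [67].

```python
# Illustrative 2D-to-3D weight inflation for initializing a 3D convolution.
import torch
import torch.nn as nn

conv2d = nn.Conv2d(32, 64, kernel_size=3)   # assumed pretrained on 2D slices
conv3d = nn.Conv3d(32, 64, kernel_size=3)   # 3D layer to be initialized

with torch.no_grad():
    depth = conv3d.kernel_size[0]            # 3
    # Repeat the 2D kernel along depth and divide by depth so the response to a
    # constant input stays comparable to the original 2D layer.
    inflated = conv2d.weight.unsqueeze(2).repeat(1, 1, depth, 1, 1) / depth
    conv3d.weight.copy_(inflated)
    conv3d.bias.copy_(conv2d.bias)
```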
Hu et al. [77] propose a hierarchical shape-perception network (HSPN), designed to reconstruct 3D point clouds (PC) from a single incomplete MRI image, specifically for brain surgery applications. HSPN consists of an encoder–decoder architecture, where the encoder, which uses a predictor based on GAN combined with PointNet++ blocks, extracts features, and the decoder, which has multiple layers, rebuilds the 3D shape. A hierarchical attention pipeline is employed to transfer feature information between the encoder and decoder stages. The generator, which contains multiple graph convolutional networks (GCNs), refines the incomplete point clouds, while a discriminator similar to WGAN-GP ensures the accuracy of the generated 3D structures. This model is designed for real-time feedback, to enable surgeons to quickly receive critical 3D information about local brain structures. HSPN outperforms other models in terms of visual quality, quantitative analysis, and classification performance, as evaluated by Chamfer distance (CD) and point-cloud-to-point-cloud (PC-to-PC) error. This study can be advanced beyond brain MRI.
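For reference, the Chamfer distance used to evaluate HSPN averages, for each point in one cloud, the distance to its nearest neighbour in the other cloud, in both directions. The naive sketch below uses plain Euclidean distances; squared-distance variants are also common, so treat it as one possible formulation.

```python
# Naive Chamfer distance between two point clouds (O(N*M), Euclidean variant).
import torch

def chamfer_distance(pc1, pc2):
    """pc1: (N, 3) and pc2: (M, 3) point clouds."""
    dists = torch.cdist(pc1, pc2)                      # (N, M) pairwise distances
    return dists.min(dim=1).values.mean() + dists.min(dim=0).values.mean()
```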
Rezaei et al. [79] employ GAN in three stages: lung segmentation, tumor segmentation, and 3D lung tumor reconstruction. Lung segmentation is performed using snake optimization, which also reduces the dimensions. Tumor segmentation is achieved using Gustafson–Kessel (GK) clustering. For the reconstruction phase, features are first extracted from 2D CT scans of tumors using a pre-trained VGG model, and are then fed into a long short-term memory (LSTM) network, which outputs compressed features. These compressed features are used as input for the GAN generator, which reconstructs the 3D image of the lung tumor. Transfer learning from 2D images is also employed, which speeds up the training process. A possible extension is the application of this model to other medical applications, such as COVID-19 diagnosis.
Safari et al. [81] use MedFusionGAN, which is an unsupervised GAN designed for medical image fusion. Its goal is to merge CT scans, which capture bone structures, with high-resolution 3D T1-Gd MRI, known for soft tissue contrast. This fusion produces images that better delineate tumor regions and reduce the time required for radiotherapy planning. MedFusionGAN employs a generator to blend the information from MRI and CT images and a PatchGAN discriminator to distinguish between the original and fused images. This model outperformed other approaches and enhances treatment accuracy by reducing the radiation exposure to healthy organs, improving auto-segmentation algorithms, radiotherapy planning, and tumor delineation. This can be extended to other organs to fill a possible gap.
Tudosiu et al. [84] propose a model using VQ-VAE and transformer, using Voxel-Based Morphometry (VBM) and Geodesic Information Flows (GIF), which demonstrates that the synthetic data retains the morphological features of real data. The model uses VQ-VAE, which compresses high-resolution images into a latent space, and a transformer that captures relationships within the compressed representations. Compared to a baseline VAE model, the proposed VQ-VAE significantly outperforms by generating realistic brain images. The model captures both healthy and diseased brain structures accurately. Future work should focus on adding conditioning mechanisms, enhancing diversity, and extending the model to include disease progression and privacy preservation features.
Poonkodi et al. [87] propose 3D-MedTranCSGAN, a 3D medical image transformation system that combines non-adversarial loss components with a Cyclic Synthesized GAN (CSGAN). This model uses 3DCascadeNet in the generator that refines image transformations by combining encoding-decoding pairs (which improves the visual output) with skip links for better resolution and smoother results, and also PatchGAN’s discriminator to assess the difference between the original and synthesized images while calculating non-adversarial losses such as content, perception, and style transfer losses to enhance the perceptual quality of transformed images. The 3D-MedTranCSGAN performs multiple tasks without modifying its core design, such as transforming PET to CT images, reconstructing CT to PET, correcting motion artifacts in MR images, and denoising PET images. The model’s performance was tested on various tasks and compared with other GAN-based models like pix2pix, PAN, Fila-sGAN, CycleGAN, and MedGAN, consistently outperforming them in accuracy and quality. The model’s outputs are not intended for clinical diagnostics, as they don’t provide significant investigative information. Instead, the model is more suited for post-processing tasks. Future improvements could include incorporating hinge loss and Wasserstein loss to enhance adversarial training and exploring the use of transformed images for diagnostic tasks.
Jung et al. [88] present a novel cGAN with a 3D discriminator that uses an attention-based 2D generator to create realistic 2D image slices, a 2D discriminator to ensure these slices meet the target condition, and a 3D discriminator to evaluate the continuity and structural coherence of consecutive 2D slices, simulating a full 3D volume. This allows the model to account for 3D structure without the computational burden typically associated with 3D cGANs. The 3D discriminator checks groups of 2D slices generated in the same mini-batch to ensure continuous and consistent output across all directions. Future work can apply this model to earlier time points in longitudinal datasets to examine its accuracy in predicting brain deformations in specific subjects. It can be extended to multiple datasets, including OASIS, and analyze the entire brain region, including subcortical structures, to detect broader brain deformations.
Aydin et al. [90] adapt the StyleGANv2 architecture to work with 3D data to generate synthetic Time-of-Flight Magnetic Resonance Angiography (TOF MRA) volumes of the Circle of Willis (CoW), highlighting the potential of this approach for broader medical imaging applications. This model generated realistic and diverse TOF MRA volumes of CoW when analyzed visually.
King et al. [91] use α-SN-GAN, consisting of a 3D DCGAN with spectral normalization regularization and an additional encoder. Spectral normalization regularization counteracts the vanishing gradients problem that occurs for small sample sizes, while the encoder alleviates mode collapse. The model produces synthetic images with the highest level of quality and variety, as demonstrated through both visual assessments and numerical evaluations. These synthetic images improve the accuracy of the diagnostic classifier.
Zhou et al. [92] introduce 3D Vector-Quantization GAN (3D-VQGAN) with a transformer using masked token modeling to generate high-resolution, diverse 3D brain tumor ROIs, which are used to enhance the classification of brain tumors. The model combines CNN with an auto-regressive transformer, preventing mode collapse and enabling high-resolution image generation. The CNN-based autoencoder extracts local features and the transformer captures long-term interrelations, resulting in competitive performance and improved classification across different brain tumor types. The authors highlight that the synthetic data generated can be directly used in tumor classification tasks, validating the superiority of their method. This approach achieves significant performance improvements, surpassing baseline models.
Zhou et al. [94] present 3D-VQGAN-cond using a class-conditioned masked transformer. This framework generates high-resolution and diverse 3D ROIs of brain tumors for both low-grade and high-grade gliomas (LGG/HGG). A temporal-agnostic masking strategy is used to help the model learn relationships between semantic tokens in the latent space. To generate the ROIs, they start with a class token (0 for LGG or 1 for HGG) and have the transformer complete the remaining indices. The generated LGG and HGG ROIs from this method are compared with baseline models, training a classification model to confirm that the proposed 3D-VQGAN-cond model improves the ability to distinguish between LGG and HGG tumors, achieving better results compared to baseline models.
Corona et al. [95] propose a Swin UNEt-TRansformer (Swin UNETR) that operates by concatenating 2D views into higher-channel 3D volumes, which turns the 3D reconstruction task into a straightforward 3D-to-3D generative modeling problem, avoiding more complex approaches while retaining key information from the 2D inputs, which are passed through the Swin UNETR backbone, and uses neural optimal transport for fast, stable training. This approach integrates signals across multiple views without requiring precise alignment, producing 3D reconstructions with limited training. Compared to other models, Swin UNETR produced the highest-quality outputs. The method treats the 2D-to-3D task as a 3D-to-3D problem, improving the correlation between the generated 3D volumes and the 2D inputs. However, this method performs a non-linear transformation in a single step, which introduces uncertainty and blurriness in the outputs, so an iterative, multi-step approach, such as a probabilistic diffusion model, could improve performance. Additionally, while the method is somewhat invariant to input alignment, optimizing the alignment, especially for high-frequency details, could enhance accuracy.
Kim et al. [97] propose Volumetric Imitation GAN (VI-GAN), with a generator incorporating 3D U-Net and ResNet framework for feature extraction and up-sampling, and a 3D convolution-based Local Feature Fusion Block (LFFB) to handle features at multiple scales. The discriminator uses 3D convolution and assesses how closely the generated volumes match real anatomical data. VI-GAN reduces errors in the relationships between neighboring slices, producing more accurate tomographic images that closely resemble the ground truth, particularly for complex anatomical structures like the lumbar vertebra, hip bone, and liver, outperforming other methods across various metrics.
Sun et al. [98] propose DU-CycleGAN, with a U-Net generator and a U-Net-like discriminator, incorporating an encoder and decoder, ensuring a pixel-by-pixel correspondence between input and output image. These two components are connected via skip connections, which improve the resolution of the discriminator. A Content-Aware Re-Assembly of Features (CARAFE) is incorporated into both generator and discriminator, consisting of two parts; one part calculates a reassembly kernel based on the content of specific target areas, while the other part recombines features using those kernels. By stacking adjacent 2D slices into a 2.5D slice, the model captures 3D information without needing the memory-heavy 3D convolutions typically required for 3D image generation. DU-CycleGAN excels in both 2D and 3D image generation, and the model is capable of converting MRI images to CT images using poorly matched data pairs and still produces better results compared to existing methods.
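The 2.5D stacking described for DU-CycleGAN can be illustrated as follows: neighbouring axial slices are gathered as channels so a 2D network still receives some through-plane context without 3D convolutions. The function name and neighbourhood size are assumptions for illustration.

```python
# Illustrative 2.5D input construction: stack adjacent slices as channels.
import torch

def make_25d_input(volume, index, neighbors=1):
    """volume: (D, H, W); returns (2*neighbors + 1, H, W) centered at `index`."""
    d = volume.shape[0]
    picks = [min(max(index + k, 0), d - 1) for k in range(-neighbors, neighbors + 1)]
    return torch.stack([volume[i] for i in picks], dim=0)
```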
Mensing et al. [99] propose a GAN model largely based on FastGAN. Linear conditioning is applied in both the generator and discriminator. The discriminator reduces feature map resolution by a half using strided convolutions and applies the Leaky ReLU activation function after each convolution. Both the generator and discriminator utilize Skip-Layer-Excitation layers, which connect blocks at different network depths, aiding in error propagation to the earlier layers of the model. This model outperforms 3D-StyleGAN.
Chithra et al. [106] combine different GANs (DCGAN, Pix2Pix GAN, and WGAN) with style transfer techniques. The synthetic MRI images produced by GANs are passed into a style neural network, which applies texture-based transformations. The original 3D MRI images from the dataset are input as content images to the style neural network. This network uses a pre-trained VGG model with 16 layers and 5 pooling layers to transfer textures from one image to another while preserving the core information of the original image. This technique produces synthetic MRI images with high accuracy.
Gao et al. [107] introduce 3DSRNet using a GAN, where the generator uses an encoder–decoder network architecture. The model integrates a CNN and a transformer model in a framework called Spine Reconstruction (SRCT). The CNN focuses on capturing the detailed surface information of the spine, while the transformer captures the global structure of the skeleton, for more accurate spine reconstruction. The texture extraction (SRTE) method is used to capture low-level texture details from the spine images, improving the model’s ability to reconstruct the 3D spine, outperforming many existing algorithms and making it a useful tool for assisting orthopedic surgeons during diagnosis.
Xue et al. [109] propose a Classification-Guided GAN with Super Resolution Refinement (CG-3DSRGAN), comprising three components: a multi-task reconstruction network (ML-Net), which uses a 3D U-Net structure for image reconstruction, with an additional classification head sharing the same encoder, to generate an initial prediction of the synthetic PET image along with dose reduction level. A super resolution network (Contextual-Net) refines the initial result to preserve high-dimensional features and contextual details. A discriminator based on pix2pix architecture using 3D operations is used to verify the authenticity of the refined image. CG-3DSRGAN generates high-quality synthetic PET images with reduced tracer doses, making it a valuable tool for enhancing PET imaging in clinical practice.
Zhang et al. [110] propose a Pyramid Transformer Network (PTNet3D), which incorporates transformer/performer layers, skip connections, and multi-scale pyramid representation. The transformer block, used in the bottleneck layer, leverages self-attention to capture global dependencies across the latent features. The performer-based encoder (PFE) and decoder (PFD) reduce computational complexity, enabling the model to handle high-resolution 3D blocks. The pyramid representation layer avoids information loss by retaining fine structural details of the brain, resulting in better accuracy and efficiency in MRI synthesis tasks.
Pradhan et al. [111] pre-process the input data for noise removal, resampling, rescaling, and normalization, followed by augmentation. The removed noise comprises regions of air, soft tissue, and fat. Resampling and rescaling convert the images to a uniform format, while normalization standardizes pixel values, optimizes the learning process, and reduces computation costs. Data augmentation techniques, such as angle and axis rotations, are applied to expand the dataset. A customized CGAN is proposed, inspired by the U-Net architecture, with a three-path design comprising contracting, bottleneck, and expanding paths. The model can predict views from all angles (0° to 360°), providing a comprehensive 3D representation of bones and joints.
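Such a pre-processing and augmentation pipeline can be sketched roughly as follows (NumPy/PyTorch/SciPy). The target grid size, rotation angle, and normalization scheme are illustrative assumptions, not the authors' exact settings.

```python
import numpy as np
import torch
import torch.nn.functional as F
from scipy.ndimage import rotate

def preprocess_volume(vol: np.ndarray, target_shape=(128, 128, 128)) -> torch.Tensor:
    """Resample to a uniform grid and normalize intensities to zero mean, unit variance."""
    t = torch.from_numpy(vol.astype(np.float32))[None, None]          # (1, 1, D, H, W)
    t = F.interpolate(t, size=target_shape, mode="trilinear", align_corners=False)
    t = (t - t.mean()) / (t.std() + 1e-8)                              # intensity normalization
    return t[0, 0]

def augment_rotation(vol: np.ndarray, angle_deg: float, axes=(1, 2)) -> np.ndarray:
    """Rotate the volume about one anatomical axis (angle/axis augmentation)."""
    return rotate(vol, angle_deg, axes=axes, reshape=False, order=1, mode="nearest")

vol = np.random.rand(90, 200, 200)              # hypothetical CT/MRI volume
x = preprocess_volume(vol)                      # (128, 128, 128) normalized tensor
aug = augment_rotation(vol, angle_deg=15.0)     # rotated copy used to expand the dataset
```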
Xia et al. [112] propose a collaborative consent GAN named AwCPM-Net, with a generator with two branches—one for cardiac phase retrieval (CPR) and the other for membrane border extraction (MBE)—and a dual-task discriminator that evaluates both tasks together. The CPR branch uses a self-supervised learning approach that does not require labeled data and focuses on detecting inter-frame deformation fields to identify cardiac phases. The MBE branch is semi-supervised and handles 3D segmentation, requiring fewer annotations than traditional models. The two branches are connected through a warming-up connection so that each enhances the other’s performance, and AwCPM-Net performs both CPR and MBE simultaneously in a single step. The GAN improves dual-task learning by aligning predicted outcomes with ground-truth data using high-dimensional feature matching. This model outperforms existing CPR methods in capturing motion signals and cardiac phases and detects arterial wall structures better than current MBE techniques. The reconstructed 3D artery anatomy allows for accurate localization and assessment of vessel stenosis.
Xing et al. [120] propose DP-GAN+B to reconstruct 3D CT volumes from 2D X-ray images. This network uses an encoder–decoder structure, where extracted features are integrated and processed through a novel sampling decoder to obtain the 3D CT output, outperforming its baseline model. Fujita et al. [121] use two GAN-based models, CycleGAN and X2CT-GAN, to generate 3D CT images from X-ray images, successfully achieving 3D reconstruction with exceptional PSNR and SSIM. Touati et al. [122] use a dual CT-synthesis GAN model, composed of a dual-branch 2D and multi-planar generator network integrating dual feature representation learning, and a discriminator network. This model not only considers 2D query image features but also captures 3D information by modeling different 2D planar views of the volumetric input data.
Bazangani et al. [124] propose a separable convolution-based Elicit GAN (E-GAN), where the discriminator is a 3D fully convolutional network and the generator features an encoder and decoder. The encoder uses two major components: an Elicit network (which extracts spatial information from FDG-PET) and a Sobel filter (which detects edges and boundaries between different tissues). This model produces high-quality 3D T1-weighted MRI, exhibiting good performance compared to SOTA methods.
Three-Dimensional Image Generation and Segmentation: The publications featuring 3D medical image generation and segmentation are explained. Prakash et al. [75] propose SculptorGAN with a Weight Pruning U-Net (WP-UNet), where 3D images are reconstructed from 2D slices after processing and interpolation, maintaining spatial and contextual continuity through weight pruning. Kidneys and kidney tumors are segmented using another WP-UNet model specifically tuned for this task, which performs voxel-wise classification and extracts detailed features from the reconstructed 3D images. This approach leads to a 35% reduction in reconstruction time and a 20% improvement in segmentation accuracy, setting a new benchmark by improving both 3D reconstruction and segmentation precision and demonstrating the potential to significantly enhance diagnostic and therapeutic applications. Through depthwise separable convolutions and pruning, the approach ensures both computational efficiency and detailed feature extraction, enabling high-accuracy identification of renal tissues and tumors. However, it still needs to be extended to other modalities beyond renal imaging.
Subramaniam et al. [82] claim that theirs is the first work to present GANs that generate realistic 3D TOF-MRA volumes along with segmentation labels. Their approach involves four variants of 3D Wasserstein GANs (WGANs), incorporating gradient penalty (GP), spectral normalization (SN), and mixed precision models (SN-MP and c-SN-MP). The models using mixed precision yielded the best results, with the lowest FID scores (measuring image quality) and optimal PRD curves (capturing data quality and variety). A key innovation is their ability to generate both 3D image patches and corresponding labels for brain vessel segmentation, which enables training deep learning models like 3D U-Nets in an end-to-end framework. The c-SN-MP model led to the best segmentation performance based on the DSC and bAVD metrics. The study used TOF-MRA data from 137 patients with cerebrovascular disease from two datasets: PEGASUS and 1000Plus. The results demonstrate the benefits of mixed precision for generating realistic 3D volumes and labels. This research paves the way for better sharing of labeled 3D medical data, which could improve deep learning model generalizability and advance medical research in cerebrovascular diseases.
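The gradient-penalty regularizer used in such WGAN variants can be sketched as follows. This is a minimal PyTorch illustration for 3D volumes, not the authors' implementation; a mixed-precision variant would additionally wrap the critic's forward pass in torch.autocast.

```python
import torch

def wgan_gradient_penalty(critic, real: torch.Tensor, fake: torch.Tensor,
                          lambda_gp: float = 10.0) -> torch.Tensor:
    """WGAN-GP penalty on 3D volumes of shape (N, C, D, H, W)."""
    # Random interpolation between real and generated samples.
    eps = torch.rand(real.size(0), 1, 1, 1, 1, device=real.device)
    interpolated = (eps * real + (1.0 - eps) * fake).requires_grad_(True)
    scores = critic(interpolated)
    # Gradient of the critic score with respect to the interpolated volumes.
    grads = torch.autograd.grad(outputs=scores.sum(), inputs=interpolated,
                                create_graph=True)[0]
    grad_norm = grads.flatten(start_dim=1).norm(2, dim=1)
    # Penalize deviation of the gradient norm from 1 (the 1-Lipschitz constraint).
    return lambda_gp * ((grad_norm - 1.0) ** 2).mean()
```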
Zi et al. [83] designed a DL model that focuses on critical image regions using an attention-based U-Net with an encoder–decoder structure, enhancing segmentation and reconstruction by dynamically prioritizing important areas. The proposed models are trained on large public datasets such as ACDC (cardiac MRI), BraTS (brain tumor MRI), and LiTS (liver tumor CT). Pre-processing steps like resampling and data augmentation (adjusting brightness, contrast, rotations, etc.) are applied, and normalizing pixel values improves model stability and accelerates convergence during training. The proposed model achieved strong results, and the integration of the self-attention mechanism significantly improved both segmentation and reconstruction tasks.
Tiago et al. [102] propose a conditional GAN based on the Pix2pix model, which is extended to 3D by incorporating a 3D U-Net as the generator. This translates anatomical label data into echocardiography-like images, enabling the model to perform paired domain translation. This approach introduces an automatic data augmentation pipeline that uses the 3D GAN model to generate additional synthetic 3D echocardiography images and labels. These GAN-generated datasets are valuable for training deep learning models, particularly for tasks like heart segmentation, providing a useful resource for cardiac imaging when real patient data is limited.
Sun et al. [105] propose three models—Per-CycleGAN-CACNN, DualCMP-GAN-CACNN, and DualCMP-GAN-3D ResU—for brain tumor and stroke image generation and lesion segmentation using GAN and 3D ResU-Net architectures. Per-CycleGAN-CACNN generates corresponding 2D target images and then recombines the slices into a 3D image, which is segmented using the CACNN network, producing effective segmentation results. DualCMP-GAN-CACNN focuses on both global and local image details to generate high-precision images by maintaining consistency at different scales, enhancing image quality and segmentation accuracy for lesions. DualCMP-GAN-3D ResU uses real data from two modalities and simulated data from DualCMP-GAN to perform lesion segmentation with a 3D Residual U-Net, showing superior performance, especially in the segmentation of stroke lesions. These three models offer valuable support for clinical decision-making and treatment by providing clear and precise images.
Three-Dimensional Image Segmentation: The publications featuring 3D medical image segmentation are explained. Elloumi et al. [80] use PGGAN combined with VGG 16+U-Net and ResNet 50+U-Net, which are applied in the generator for image segmentation. The discriminator is constructed using pix2pix PGGAN, which helps characterize images more accurately. A DCNN is used to further enhance the performance of this hybrid architecture, leading to highly accurate segmentation results. This study can be extended to other organs.
Bui et al. [85] use SAM3D, which combines SAM’s transformer-based encoder with a lightweight 3D CNN decoder to handle volumetric data more efficiently. It includes 3D convolutional blocks with skip connections, offering an efficient approach to 3D medical image segmentation. Unlike approaches that prioritize precision at the cost of increased complexity and training time, SAM3D is simple and computationally efficient while still achieving high segmentation performance. The model uses a frozen SAM encoder to extract features and a custom 3D decoder to capture depth relationships, addressing challenges like weak boundaries in medical images.
Tyagi et al. [86] address two major challenges in medical image segmentation: data scarcity and class imbalance, which can lead to overfitting and poor performance. They propose a novel method, CSE-GAN, based on a 3D CGAN for lung nodule segmentation. The generator is modeled with a concurrent spatial and channel squeeze and excitation (CSE) module, improving the segmentation performance; it learns features from input patches using the ground truth as a reference during training. The discriminator is a simple classification network that uses a spatial squeeze and channel excitation (sScE) module to differentiate between real and fake segmentation masks, which helps in the channel-wise recalibration of feature maps and improves classification accuracy. To avoid overfitting, patch-based training is used, and back-propagation is applied to both networks to update weights, improving their performance over time. The CSE-GAN outperforms other tested network architectures, such as various U-Net models and R2UNet, highlighting its effectiveness in lung nodule segmentation and demonstrating its generalizability.
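A minimal 3D sketch of a concurrent spatial and channel squeeze-and-excitation block of this kind is given below (PyTorch). The channel counts, reduction ratio, and fusion by element-wise maximum are illustrative assumptions, not the authors' exact module.

```python
import torch
import torch.nn as nn

class ConcurrentSE3D(nn.Module):
    """Concurrent spatial and channel squeeze-and-excitation for tensors of shape (N, C, D, H, W)."""
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        # Channel excitation: global pooling -> bottleneck -> per-channel gates.
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool3d(1),
            nn.Conv3d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Spatial excitation: 1x1x1 convolution -> per-voxel gate.
        self.spatial_gate = nn.Sequential(nn.Conv3d(channels, 1, kernel_size=1), nn.Sigmoid())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        cse = x * self.channel_gate(x)   # channel-wise recalibration
        sse = x * self.spatial_gate(x)   # spatial recalibration
        return torch.max(cse, sse)       # combine the two excitation paths

x = torch.randn(1, 32, 16, 64, 64)
y = ConcurrentSE3D(32)(x)                # output has the same shape as the input
```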
Ge et al. [89] propose an average super-resolution generative adversarial network (ASRGAN), whose generator is a 3D convolutional network with three multi-path average blocks of convolutional layers, each followed by instance normalization and a rectified linear unit (ReLU) activation, to produce thin-slice CT images; feature extraction and image reconstruction are learnt end-to-end at the same time. The discriminator contains five convolutional layers, each followed by a ReLU activation and instance normalization, where features are captured by convolutional layers with different kernel sizes and strides, ensuring realistic reconstructions. The network also uses a segmentation process that focuses on both rough and detailed segmentation stages, similar to the 3D U-Net structure. ASRGAN demonstrates strong generalization across different CT scanner models without requiring extra retraining, outperforming other methods.
Liu et al. [93] propose a 3D Edge-aware Attention GAN (3D EAGAN) with a discriminator network to distinguish between predicted and real prostates. The EASNet is built on a U-Net encoder–decoder backbone and incorporates several components: a detail compensation module (DCM), four 3D spatial and channel attention modules (3D SCAM), an edge enhancement module (EEM), and a global feature extractor (GFE). The proposed method significantly improved performance metrics compared to SOTA segmentation methods.
Çelik et al. [96] propose Vol2SegGAN, where pre-processing involves steps like brain region extraction, dataset label editing, MNI152 registration, and sampling. Segmentation is then performed using a generator incorporating Attention Context Fusion (ACFP) and a Position Attention Mechanism (PAM), and a discriminator that distinguishes between real and fake data. The model performed best in segmenting cerebrospinal fluid, gray matter, and white matter, and shows potential for use in training medical professionals.
Vagni et al. [100] adapted Vox2Vox GAN, originally proposed by Cirillo et al. [152]. The Vox2Vox generator uses a combination of U-Net and ResNet architectures. It consists of an encoder–decoder structure connected by skip connections at each level, with a bottleneck containing four residual blocks. The encoder processes 3D input images through convolutional layers to extract hierarchical features, while the decoder upscales the features back to their original spatial dimensions using up-convolutions. The discriminator is a CNN classifier following a PatchGAN style, which classifies each patch of an image as real or fake. This model successfully performed auto-segmentation of the bladder, rectum, and femoral heads from pelvic MRIs with high accuracy, indicating that it could be a robust tool for medical segmentation.
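A compact sketch of a PatchGAN-style 3D discriminator of the kind described above is given below (PyTorch). The number of blocks, channel widths, and use of instance normalization are illustrative assumptions rather than the Vox2Vox implementation.

```python
import torch
import torch.nn as nn

def conv_block(in_ch: int, out_ch: int, stride: int) -> nn.Sequential:
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, kernel_size=4, stride=stride, padding=1),
        nn.InstanceNorm3d(out_ch),
        nn.LeakyReLU(0.2, inplace=True),
    )

class PatchDiscriminator3D(nn.Module):
    """Scores overlapping 3D patches as real/fake instead of the whole volume."""
    def __init__(self, in_channels: int = 2):   # e.g. image and segmentation concatenated
        super().__init__()
        self.net = nn.Sequential(
            conv_block(in_channels, 32, stride=2),
            conv_block(32, 64, stride=2),
            conv_block(64, 128, stride=2),
            nn.Conv3d(128, 1, kernel_size=4, stride=1, padding=1),   # patch-wise logits
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)   # (N, 1, d, h, w) map of per-patch real/fake scores

scores = PatchDiscriminator3D()(torch.randn(1, 2, 64, 64, 64))
```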
Kanakatte et al. [101] implement a 3D GAN with a generator that includes three main components: an encoder, a decoder with skip connections, and a bottleneck. The bottleneck consists of 3D convolutional layers that reduce the number of parameters and improve feature representation. The feature maps from each encoder layer are concatenated with the respective bottleneck layers. Skip connections between the encoder and decoder help recover lost features during down-sampling, crucial for preserving important medical imaging details. The discriminator incorporates 3D convolution layers and follows Patch GAN architecture. The model’s performance shows high accuracy and matches the performance of 2D models for some classes by effectively incorporating 3D contextual information.
Elloumi et al. [103] use Pix2pix GAN to generate artificial medical images and semi-supervised DCGAN for 3D lung segmentation. The generator follows a U-Net design with one convolution layer in both the encoder and decoder blocks to conserve GPU memory. An improved 3D U-Net network is integrated as the discriminator. Patient data is safeguarded through a watermarking technique using the Schur vector. The simulation results demonstrate that this approach effectively combines deep learning through GANs for medical image segmentation while simultaneously securing the images with an appropriate watermarking algorithm.
Sharaby et al. [104] present a modified Pix2Pix GAN model, where the generator uses a residual U-Net-based model with convolutional blocks for feature extraction and transformation. The encoder–decoder structure captures image features during downsampling and reconstructs the image during upsampling. The discriminator, with convolutional layers, downscales the inputs to focus on high-level features and distinguish between real and generated images, improving the segmentation performance, which is particularly effective in renal diagnosis.
Kermi et al. [108] use a 3D GAN where the generator employs a U-Net-based encoder–decoder architecture, enhanced with a bottleneck block, and the discriminator is structured as a PatchGAN, to segment HGG and LGG glioma sub-regions in 3D brain MRI.
He et al. [113] embed a 3D U-Net into a DCGAN framework to create a semi-supervised 3D liver segmentation algorithm. The 3D U-Net acts as the discriminator to differentiate real from generated images and produce the final segmentation, while the DCGAN generator produces synthetic images by restoring feature maps from real images. Data pre-processing is also emphasized for better training, and the U-Net model, originally designed for 2D medical image segmentation, is extended to 3D by modifying all layers to handle volumetric data.
Amran et al. [117] use a brain-vessel generative adversarial network (BV-GAN) to segment brain blood vessels, a task that typically suffers from memory and class-imbalance issues. This shortens the analysis time and improves the diagnosis of cerebrovascular disorders.
Three-Dimensional Image Transformation: The publications featuring 3D medical image transformation are explained. Zhou et al. [78] use a Segmentation Guided Style-based Generative Adversarial Network (SGSGAN), which incorporates a style-based generator that uses style modulation to control and adjust hierarchical features during image translation and generate realistic textures. Since different features carry varying levels of importance, the model adjusts them accordingly after each convolutional layer. The discriminator conducts adversarial training with the generator. A segmentation-guided strategy is used to enhance the quality of the images, especially in clinically relevant regions of interest (ROIs): the segmentation network generates masks of specific regions in the image (e.g., liver, brain, kidney, and bladder), which guide the generator to improve the key anatomical regions in the generated image. This method could be extended to other medical imaging tasks, such as CT and MRI image translation, or the model could be further enhanced in patch-level synthesis, improving the use of style information.
Joseph et al. [114] use a supervised 3D CycleGAN model consisting of two generators (G1, G2), which use U-Nets with residual connections, and two discriminators (D1, D2), which use CNNs. G1 maps CBCT to pseudo-FBCT images, and G2 maps pseudo-FBCT images back to CBCT. The U-Net includes long skip connections to preserve the structural similarities between CBCT and FBCT images. The model incorporates an identity loss and a gradient difference loss (GDL) to improve the accuracy of the transformation, and D1 and D2 evaluate the respective images. Results indicate that the proposed CycleGAN model performs well, with pseudo-FBCT images closely resembling the real FBCT images.
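The combination of cycle, identity, and gradient difference terms described above can be sketched as follows (PyTorch). The loss weights and the exact gradient-difference formulation are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def gradient_difference_loss(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """Penalize mismatched spatial gradients of 3D volumes shaped (N, C, D, H, W)."""
    loss = 0.0
    for dim in (2, 3, 4):  # D, H, W axes
        dp = torch.diff(pred, dim=dim)
        dt = torch.diff(target, dim=dim)
        loss = loss + F.l1_loss(dp.abs(), dt.abs())
    return loss

def generator_losses(g1, g2, cbct, fbct, lambda_cyc=10.0, lambda_id=5.0, lambda_gdl=1.0):
    """Cycle, identity and gradient-difference terms for a paired CBCT -> FBCT CycleGAN."""
    pseudo_fbct = g1(cbct)                     # CBCT translated to pseudo-FBCT
    recon_cbct = g2(pseudo_fbct)               # translated back to CBCT
    cycle = F.l1_loss(recon_cbct, cbct)        # cycle-consistency
    identity = F.l1_loss(g1(fbct), fbct)       # G1 should leave real FBCT unchanged
    gdl = gradient_difference_loss(pseudo_fbct, fbct)
    return lambda_cyc * cycle + lambda_id * identity + lambda_gdl * gdl
```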
Three-Dimensional Image Enhancement: The publications featuring 3D medical image enhancement are explained. Dong et al. [115] improve the quality of 3D MRI by using a Denoising CycleGAN and Enhancement CycleGAN. The Denoising CycleGAN denoises the cine images and the Enhancement CycleGAN enhances the spatial resolution and contrast. This framework enhances the image quality significantly with high computational efficiency.
Zhang et al. [116] propose a Super-resolution Optimized Using Perceptual-tuned Generative Adversarial Network (SOUP-GAN), which produces high-resolution, thinner image slices with anti-aliasing and deblurring, outperforming other conventional resolution-enhancement methods and previous SR work on medical images in both qualitative and quantitative comparisons.
Zhang et al. [119] propose a long short-term memory and attention-based generative adversarial network (LSTMAGAN) to create a super-resolution reconstruction of 3D medical images, where an attention gate is added to the generator network to enhance the feature information and suppress the role of background noise; LSTM is used in the discriminator network. This model outperforms other models for the task of super-resolution.
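An additive attention gate of the kind added to such generators can be sketched as follows (PyTorch). The layer sizes are illustrative, and the gating signal is assumed to have been resampled to the same spatial size as the skip features; this is not the authors' exact layer.

```python
import torch
import torch.nn as nn

class AttentionGate3D(nn.Module):
    """Additive attention gate for 3D feature maps (same spatial size assumed)."""
    def __init__(self, feat_ch: int, gate_ch: int, inter_ch: int):
        super().__init__()
        self.theta = nn.Conv3d(feat_ch, inter_ch, kernel_size=1)   # project skip features
        self.phi = nn.Conv3d(gate_ch, inter_ch, kernel_size=1)     # project gating signal
        self.psi = nn.Sequential(nn.Conv3d(inter_ch, 1, kernel_size=1), nn.Sigmoid())

    def forward(self, features: torch.Tensor, gate: torch.Tensor) -> torch.Tensor:
        attn = self.psi(torch.relu(self.theta(features) + self.phi(gate)))  # (N, 1, D, H, W)
        return features * attn   # suppress voxels the gate considers background noise

f = torch.randn(1, 32, 16, 32, 32)   # skip-connection features
g = torch.randn(1, 64, 16, 32, 32)   # coarser gating signal, upsampled to the same size
out = AttentionGate3D(32, 64, 16)(f, g)
```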
The literature survey regarding the use of GANs in 3D medical imaging is summarized in Table 5.

3.5. Public Datasets

In the reviewed papers, some datasets were public and could easily be accessed, while others were private. The private datasets are not shared due to ethical constraints, patient privacy concerns, or institutional governance policies, which limits reproducibility and model generalizability. Table 6 shows information on the public datasets used in the research papers. The majority of the targeted organs are the brain and lungs, and the majority of image modalities used are MRI or CT, so the most commonly used public datasets are those related to these organs and modalities, namely BraTS and LUNA. Public datasets help facilitate broader research and model development, while private datasets raise issues of potential privacy breaches and data bias [175].

3.6. Code Availability

Some research papers have released their code, which can help speed up the research process, while others have not provided access to their code. This hinders reproducibility and could be due to several reasons, including institutional restrictions, a lack of resources to document or package code for reuse, and patent intent, to name a few. The accessible links to the code featured in the reviewed papers are displayed in Table 7.
For a detailed analysis of all the extracted research papers regarding the dataset used, code availability, public/private accessibility, direct code links where available, and reasons for unavailability, refer to Supplementary Table S2.

3.7. Evaluation Metrics

The evaluation metrics used in the reviewed publications include accuracy, the Dice coefficient, Fréchet Inception Distance (FID), Structural Similarity Index Measure (SSIM), MS-SSIM, PSNR, Learned Perceptual Image Patch Similarity (LPIPS), Kernel Inception Distance (KID), Inception Score (IS), and Hausdorff distance (HD), among others. For each paper, the evaluation metrics used, along with the performance of the model, are summarized in Table 8.
Segmentation Metrics: The Dice coefficient measures the overlap between the predicted segmentation and the ground-truth segmentation, evaluating their similarity and the reproducibility of a segmentation [177]. Dong et al., 2024 [115], Kim et al., 2024 [97], and Vagni et al., 2024 [100] use the Dice coefficient; however, it may not reflect clinical relevance in small structures. HD assesses segmentation accuracy by measuring the dissimilarity between the sets of boundary points of a segmented structure and the ground truth [177]. Sun et al. [105], Rezaei et al. [79], and Çelik et al. [96] use HD; however, it is sensitive to spatial outliers.
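For reference, with a predicted mask $P$, a ground-truth mask $G$, and their boundary point sets $\partial P$ and $\partial G$, these two metrics can be written as

$$\mathrm{DSC}(P,G) = \frac{2\,|P \cap G|}{|P| + |G|}, \qquad \mathrm{HD}(P,G) = \max\left\{ \sup_{p \in \partial P} \inf_{g \in \partial G} d(p,g),\ \sup_{g \in \partial G} \inf_{p \in \partial P} d(p,g) \right\},$$

where $d(\cdot,\cdot)$ is the Euclidean distance between voxel coordinates; the supremum terms make HD sensitive to single outlying boundary points, which is the limitation noted above.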
Synthetic Generation Evaluation Metrics: FID, introduced by [178], uses an Inception network pretrained on the ImageNet dataset [179] to evaluate the quality of GAN-generated images. It measures the distance between the feature distributions of GAN-generated images and the real images from the GAN’s training dataset [180]. Kim et al. [73], Aydin et al., 2024 [90], and Zhou et al. [94] use FID; however, it compares distributions rather than individual samples. SSIM assesses structural information, contrast, and luminance to evaluate the quality of a reconstructed image by comparing it to a reference image [181], under the assumption that the human visual system is highly adapted to extracting image structures [119]. Zhou et al. [78], Zi et al. [83], and Sun et al. [98] use SSIM; however, it is less suitable for images with complex textures and patterns, as it may overestimate smooth regions and underestimate anatomical distortions. IS assesses the quality and diversity of generated images using predicted class probabilities from a pre-trained classifier, often Inception V3 [182,183]. LPIPS evaluates the perceptual similarity between a generated image and an image from the real dataset [183], capturing more complex and subtle visual differences that align with human perception [184]. Kim et al. [73] and Gao et al. [107] use LPIPS; however, it may be influenced by the choice of feature extractor. KID offers greater stability for smaller sample sizes; it assesses the similarity between real and generated images by employing a polynomial kernel-based Maximum Mean Discrepancy (MMD). Jung et al. [88] use KID; however, like FID, it compares distributions. MMD assesses the similarity between images by measuring the distance between the distributions of features extracted from generated and real images, where a smaller MMD represents higher similarity [183]. Kim et al. [73], Zhou et al. [94], and Mensing et al. [99] use MMD; however, it depends heavily on the kernel size.
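Under the usual Gaussian assumption on the Inception features, with means $\mu_r, \mu_g$ and covariances $\Sigma_r, \Sigma_g$ for the real and generated image sets, FID takes the form

$$\mathrm{FID} = \lVert \mu_r - \mu_g \rVert_2^2 + \mathrm{Tr}\!\left( \Sigma_r + \Sigma_g - 2\,(\Sigma_r \Sigma_g)^{1/2} \right),$$

which makes explicit that it compares two feature distributions as a whole rather than scoring individual generated samples.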
Human Evaluation Consideration: Visual Turing Tests (VTT) evaluate the quality and realism of generated images by asking human experts, such as radiologists or clinicians, to distinguish between real and synthetic medical images [185]. However, these tests are subject to individual variability and are not standardized. Figure 10 summarizes the distribution of evaluation metrics across the reviewed studies.

4. Discussion and Challenges

This systematic review highlights the advancements in the applications of AI in 3D medical image analysis, particularly focusing on GANs, across different image modalities, including MRI, CT, X-ray, and ultrasound. These advancements hold promising results for improving the diagnostic accuracy and earlier detection of various diseases and abnormalities. The 3D medical imaging field has been significantly impacted by GANs, as demonstrated by the reviewed literature for medical imaging tasks like image generation, segmentation, modality translation, reconstruction, etc. This has enabled promising advancements in both clinical and research settings. However, some challenges remain and are mentioned below.
Data Scarcity and Limited Data Availability: Limited 3D medical imaging datasets are a core challenge when it comes to training robust generative models. Models are trained on relatively small datasets, since high-quality 3D medical datasets are scarce due to privacy constraints, acquisition costs, and the need for expert annotations; the models’ generalizability is therefore affected. The contemporary literature has increasingly addressed this issue by employing GANs to produce synthetic 3D volumes that facilitate modeling tasks like segmentation, reconstruction, and modality translation. A few studies used private datasets that are not readily available for use, thereby limiting progress in this field because of reproducibility restrictions. Several works address this challenge through data augmentation, utilizing GANs to expand training datasets by generating 3D medical image volumes that mimic real medical datasets. For example, Kim et al. [73], Zhou et al. [92], and Aydin et al. [90] generate high-fidelity 3D medical images to enrich datasets; Sun et al. [74] generate high-resolution 3D images for training models; and King et al. [91] generate synthetic MRIs for classification tasks, to name a few. These studies not only increased dataset sizes but also introduced anatomical variability to encourage robust training of generalizable 3D models. These synthetic volumes are used directly in downstream modeling tasks, not only classification, which addresses the data scarcity challenge. Takeaway: In domains where data is scarce, 3D GAN-based augmentation shows promising advancements beyond classification, especially in segmentation and reconstruction tasks, e.g., organ boundary refinement and brain tumor delineation.
Computational Complexity and Training Constraints: Training 3D GAN models on volumetric data increases computational complexity due to the increased dimensionality and memory requirements. To mitigate memory constraints, recent studies have explored techniques such as model quantization, sparse convolutions, and patch-based training. For example, quantization has been used in 3D medical image anomaly detection, reducing memory usage while preserving performance and thereby enabling deployment in resource-limited environments. Additionally, some works employ architectural simplifications, like reduced receptive field sizes or channel pruning, to reduce the computational overhead while retaining anatomical fidelity. Several studies extend 2D architectures to 3D by inflating convolutional kernels or modifying input dimensions; Liu et al. [67] inflate pretrained 2D convolutional weights for 3D image synthesis, and Aydin et al. [90] extend StyleGAN2 (originally trained on 2D images) to 3D medical imaging. Transfer learning involves leveraging pretrained weights from related tasks or domains, where pretrained models (often trained on large 2D datasets) are fine-tuned for volumetric synthesis or reconstruction, as demonstrated by Rezaei et al. [79], who use pretrained models to improve 3D tumor reconstruction. While these strategies can reduce data requirements and training time, they may compromise image resolution and anatomical fidelity. Therefore, benchmarking lightweight architectures that optimize both clinical accuracy and computational efficiency remains a critical priority. Takeaway: To mitigate the computational complexity and high memory demands of 3D GANs, techniques like patch-based training, quantization, and mixed precision are increasingly being adopted.
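The kernel-inflation idea can be sketched as follows (PyTorch): each pretrained 2D kernel is replicated along the new depth axis and rescaled so that the initial 3D response roughly matches the 2D one. This is a simplified illustration rather than the cited implementations.

```python
import torch
import torch.nn as nn

def inflate_conv2d_to_3d(conv2d: nn.Conv2d, depth: int = 3) -> nn.Conv3d:
    """Build a Conv3d whose weights are the pretrained 2D weights repeated over a depth axis."""
    conv3d = nn.Conv3d(
        conv2d.in_channels, conv2d.out_channels,
        kernel_size=(depth, *conv2d.kernel_size),
        stride=(1, *conv2d.stride), padding=(depth // 2, *conv2d.padding),
        bias=conv2d.bias is not None,
    )
    with torch.no_grad():
        w2d = conv2d.weight.data                                   # (out, in, kH, kW)
        w3d = w2d.unsqueeze(2).repeat(1, 1, depth, 1, 1) / depth   # replicate and rescale
        conv3d.weight.copy_(w3d)
        if conv2d.bias is not None:
            conv3d.bias.copy_(conv2d.bias.data)
    return conv3d

conv2d = nn.Conv2d(3, 16, kernel_size=3, padding=1)   # e.g. a layer with pretrained 2D weights
conv3d = inflate_conv2d_to_3d(conv2d)                 # ready for (N, 3, D, H, W) inputs
```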
Code Availability and Reproducibility: The limited availability of source code and pre-trained models is a major barrier to progress in this field. Since most studies did not provide access to their code, reproducing and extending that research is difficult. The few studies that support reproducibility by providing their code, facilitating benchmarking and comparative studies, are those of Sun et al. [74], Subramaniam et al. [82], Bui et al. [85], Jung et al. [88], Aydin et al. [90], and Zhang et al. [110]. The research community could greatly benefit from open repositories and standardized reporting practices, as this would enhance transparency and reproducibility across the field.
Generalizability and Modality Bias: Most of the research papers focus on training GAN models on MRI/CT, leaving other modalities such as PET, ultrasound, and SPECT under-explored. Due to this modality bias, the generalizability of current models is affected, and their capacity to handle other image modalities may be limited in cross-modality or multi-modal tasks; shifting their domain may therefore degrade performance. Addressing this challenge requires training strategies and modality-aware architectures that are capable of handling diverse imaging characteristics.
Clinical Reliability and Ethical Considerations: Ensuring the reliability of GAN-generated 3D medical images is a critical challenge, especially in applications that directly impact patient care, such as super-resolution, denoising, and image translation. The contemporary literature addresses this through a combination of quantitative metrics, clinical expert evaluation, and task-specific validation. Some studies assess anatomical fidelity using segmentation accuracy, while others incorporate radiologist scoring or blinded expert evaluation for clinical realism. Another important aspect comprises hallucination artifacts, where GANs introduce anatomically unrealistic structures, and information dropout, where clinically relevant features are lost. These remain underexplored, and undetected hallucination artifacts in applications like image translation (for example, generating PET from MRI) can lead to inaccurate diagnosis and treatment planning; they are potential directions for future work and need to be addressed with priority. The generated images need to be validated clinically to ensure their reliability, as they may otherwise raise ethical concerns in decision-making or cause diagnostic errors. Some studies utilize hybrid models that combine GANs with other generative models (e.g., VAEs or diffusion models) to improve the fidelity and diversity of generated images. For example, Zhou et al. [92] combine vector quantization with GANs for enhanced image synthesis. Clinical validation remains limited, prompting future research to consider explainability and real-world testing. Takeaway: To address the risk of hallucination and information dropout, some studies employed expert annotation, downstream task evaluation, and cross-modality comparison. Touati et al. [122] use cross-modality comparison to avoid these issues of missing anatomical details.
This field is rapidly evolving towards multi-tasking frameworks and memory-efficient architectures, leading to an increased inclination towards hybrid models, cross-modal synthesis, and more generalized and transferable generative frameworks. However, several barriers are hindering clinical deployment; these include a lack of explainability, which leads to trust issues among human experts, and privacy concerns, which limit reproducibility. These need to be addressed through transparency, interdisciplinary collaboration, and evaluation protocols. Another emerging alternative is diffusion models, which are rapidly gaining ground in contemporary medical imaging applications; however, they are computationally complex, requiring longer inference times and larger datasets. In emerging research, hybrid models combining GANs and diffusion models may become the next focus of exploration for medical imaging tasks.
Limitations: Our exclusion of 2.5D, patch-wise, stacked 2D, and borderline cases may omit some hybrid approaches that contribute to volumetric synthesis. While this decision was made to ensure consistency in covering fully 3D GANs, it may affect the coverage of transitional architectures. This review also does not cover diffusion models, an evolving class of generative models in 3D medical imaging, reflecting its scope, which focuses exclusively on GANs.

5. Future Directions

According to the recent advancements and challenges discussed thus far, several possible future directions are emerging in the field of 3D medical imaging analysis using GANs, reflecting technical innovation and clinical/computational constraints for 3D images.
Multi-Modality Synthesis for Diagnosis: Future GAN models should focus on extending from single-modality generation towards multi-modality image synthesis. This trend can greatly enhance the diagnosis procedure, especially in settings where some modalities are quite costly or completely unavailable. These cross-modal GANs can improve diagnostic reliability, as well as clinical trust.
Working on Under-Explored Modalities like PET, Ultrasound, or SPECT Rather than MRI and CT: Current research is dominated by MRI and CT. Future work can explore tailored GANs for modalities like PET, ultrasound, and SPECT, broadening synthetic data generation across different clinical environments.
Architecturally Efficient GAN Models with Less Computational Complexity to Counter the High Memory Demands of 3D GANs: This is crucial for real-time synthesis and integration into clinical settings with limited hardware.
Hybrid Generative Frameworks Combining GANs with Transformers, VAEs, or Diffusion Models: This approach can enhance diversity, stability, and interpretability.
Personalized GANs for Patient-Specific and Rare Pathological Conditions: GANs can be personalized to simulate disease progression, supporting personalized treatment plans and providing training datasets for these rare conditions.
Enhanced Clinical Reliability: Future research must prioritize the implementation of hallucination-aware metrics, such as uncertainty quantification and voxel-wise anomaly detection, in order to assess the fidelity and detect implausible structures. Clinical task validation, where the synthetic image is evaluated based on the influence on diagnostic accuracy, segmentation performance, and treatment planning outcomes, should be adopted as a standard practice. Explainability frameworks should be explored to build trust in synthetic data and for integration into clinical workflows.

6. Conclusions

GANs have rapidly evolved the field of medical imaging analysis; however, the majority of existing surveys focus on 2D medical imaging applications or provide generalized overviews of 3D medical imaging analysis only up to the year 2022. This systematic review addresses this critical gap by providing the first comprehensive review of GAN-based methods for 3D medical imaging, focusing on the latest innovations in GAN architectures published between 2022 and 2025. The 3D nature of the data introduces computational, architectural, and clinical complexity. Through an in-depth analysis of 3D GAN models, their applications, discussions, limitations, and future directions, this systematic review aims to guide researchers and practitioners toward effective solutions for volumetric data analysis. GANs consist of two neural networks working together to generate and evaluate images. In medical imaging, GANs use advanced image analysis techniques to enhance image quality, generate crucial training data, and improve disease detection and diagnosis. They help boost diagnostic accuracy and expand small datasets, contributing to better patient care and research outcomes. By highlighting opportunities for future research, this review serves as a valuable resource for researchers and investors looking to innovate in medical imaging. In this review, we categorized the studies by data type (2D or 3D), with a focus on 3D data, and further grouped them by imaging modalities like CT, MRI, and PET, as well as by dataset. The publications showed a significant evolution in GAN architectures, contributing meaningfully to 3D medical imaging tasks such as image generation, segmentation, reconstruction, and cross-modality translation.
Despite these advancements, several challenges remain, including computational complexity, generalization bias, the under-exploration of modalities other than MRI/CT, the unavailability of code for reproducibility, and ethical concerns. These gaps present possible future directions for researchers. In conclusion, continued research into more efficient GAN architectures and multi-modal operations may further enhance performance and unlock the full potential of GANs for medical practice.

Supplementary Materials

The following supporting information can be downloaded at https://www.mdpi.com/article/10.3390/app152011219/s1: Table S1: Table depicting Database and Search Strategy for study selection. Table S2: Summary of dataset and code availability across included studies, highlighting public/private accessibility, direct code links where available, and outlining reasons for unavailability.

Author Contributions

Conceptualization, A.A. and J.C.; methodology, resources, data curation, formal analysis, writing—original draft preparation, Z.U.; writing—review and editing, A.A.; supervision, A.A. and J.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

This study does not include experimental data.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Boeken, T.; Feydy, J.; Lecler, A.; Soyer, P.; Feydy, A.; Barat, M.; Duron, L. Artificial intelligence in diagnostic and interventional radiology: Where are we now? Diagn. Interv. Imaging 2023, 104, 1–5. [Google Scholar] [CrossRef] [PubMed]
  2. Achour, N.; Zapata, T.; Saleh, Y.; Pierscionek, B.; Azzopardi-Muscat, N.; Novillo-Ortiz, D.; Morgan, C.; Chaouali, M. The role of AI in mitigating the impact of radiologist shortages: A systematised review. Health Technol. 2025, 15, 489–501. [Google Scholar] [CrossRef] [PubMed]
  3. Najjar, R. Redefining radiology: A review of artificial intelligence integration in medical imaging. Diagnostics 2023, 13, 2760. [Google Scholar] [CrossRef] [PubMed]
  4. Pinto-Coelho, L. How artificial intelligence is shaping medical imaging technology: A survey of innovations and applications. Bioengineering 2023, 10, 1435. [Google Scholar] [CrossRef]
  5. Syryh, A.S.; Bondarenko, G.O. 3D Tumor Segmentation with Interpolation using Deep Neural Networks Based on 3D Medical Images for Subsequent 3D Visualization. In Proceedings of the 2024 V International Conference on Neural Networks and Neurotechnologies (NeuroNT), Saint Petersburg, Russia, 20 June 2024; IEEE: New York, NY, USA, 2024; pp. 72–74. [Google Scholar]
  6. Singh, A. Significance of Generative AI in Medicine and Healthcare. 2025. Available online: https://www.researchgate.net/publication/389521891_Significance_of_Generative_AI_in_Medicine_and_Healthcare (accessed on 2 October 2025).
  7. Shakya, K.S.; Alavi, A.; Porteous, J.; K, P.; Laddi, A.; Jaiswal, M. A Critical Analysis of Deep Semi-Supervised Learning Approaches for Enhanced Medical Image Classification. Information 2024, 15, 246. [Google Scholar] [CrossRef]
  8. Liu, Z.; Alavi, A.; Li, M.; Zhang, X. Self-supervised contrastive learning for medical time series: A systematic review. Sensors 2023, 23, 4221. [Google Scholar] [CrossRef]
  9. Afridi, S.; Khattak, M.I.; Irfan, M.A.; Jan, A.; Asif, M. Deep Learning Techniques for 3D-Volumetric Segmentation of Biomedical Images. In Advances in Deep Generative Models for Medical Artificial Intelligence; Springer: Cham, Switzerland, 2023; pp. 1–41. [Google Scholar]
  10. Liaw, Z.K.; Das, A.; Hussain, S.; Yang, F.; Liu, Y.; Goh, R.S.M. SegMAE-Net: A Hybrid Method Using Masked Autoencoders for Consistent 3D Medical Image Segmentation. In Proceedings of the 2024 IEEE Conference on Artificial Intelligence (CAI), Singapore, 25–27 June 2024; IEEE: New York, NY, USA, 2024; pp. 1272–1277. [Google Scholar]
  11. Li, J.; Chen, S.; Ma, S.; Guo, F.; Tang, J. MixUNet: Mix the 2D and 3D Models for Robust Medical Image Segmentation. In Proceedings of the 2023 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), Istanbul, Turkiye, 5–8 December 2023; IEEE: New York, NY, USA, 2023; pp. 1242–1247. [Google Scholar]
  12. Xu, X.; Lu, W.; Lei, J.; Qiu, P.; Shen, H.B.; Yang, Y. SliceProp: A Slice-Wise Bidirectional Propagation Model for Interactive 3D Medical Image Segmentation. In Proceedings of the 2023 IEEE International Conference on Medical Artificial Intelligence (MedAI), Beijing, China, 18–19 November 2023; IEEE: New York, NY, USA, 2023; pp. 414–424. [Google Scholar]
  13. Reyes-Santias, F.; García-García, C.; Aibar-Guzmán, B.; García-Campos, A.; Cordova-Arevalo, O.; Mendoza-Pintos, M.; Cinza-Sanjurjo, S.; Portela-Romero, M.; Mazón-Ramos, P.; Gonzalez-Juanatey, J.R. Cost analysis of magnetic resonance imaging and computed tomography in cardiology: A case study of a university hospital complex in the Euro region. Healthcare 2023, 11, 2084. [Google Scholar] [CrossRef]
  14. Murali, S.; Ding, H.; Adedeji, F.; Qin, C.; Obungoloch, J.; Asllani, I.; Anazodo, U.; Ntusi, N.A.; Mammen, R.; Niendorf, T.; et al. Bringing MRI to low- and middle-income countries: Directions, challenges and potential solutions. NMR Biomed. 2024, 37, e4992. [Google Scholar] [CrossRef]
  15. Ghadimi, M.; Sapra, A. Magnetic Resonance Imaging Contraindications. 2019. Available online: https://www.ncbi.nlm.nih.gov/books/NBK551669/ (accessed on 2 October 2025).
  16. He, X.; Chu, X. MedPipe: End-to-End Joint Search of Data Augmentation and Neural Architecture for 3D Medical Image Classification. In Proceedings of the 2023 IEEE International Conference on Medical Artificial Intelligence (MedAI), Beijing, China, 18–19 November 2023; IEEE: New York, NY, USA, 2023; pp. 344–354. [Google Scholar]
  17. Zhang, S.; Li, Z.; Zhou, H.Y.; Ma, J.; Yu, Y. Advancing 3D medical image analysis with variable dimension transform based supervised 3D pre-training. Neurocomputing 2023, 529, 11–22. [Google Scholar] [CrossRef]
  18. Amirian, M.; Barco, D.; Herzig, I.; Schilling, F.P. Artifact Reduction in 3D and 4D Cone-beam Computed Tomography Images with Deep Learning-A Review. IEEE Access 2024, 12, 10281–10295. [Google Scholar] [CrossRef]
  19. Shao, L.; Chen, B.; Zhang, Z.; Zhang, Z.; Chen, X. Artificial intelligence generated content (AIGC) in medicine: A narrative review. Math. Biosci. Eng. 2024, 21, 1672–1711. [Google Scholar] [CrossRef]
  20. Maken, P.; Gupta, A. 2D-to-3D: A review for computational 3D image reconstruction from X-ray images. Arch. Comput. Methods Eng. 2023, 30, 85–114. [Google Scholar] [CrossRef]
  21. Fernandez, V.; Pinaya, W.H.L.; Borges, P.; Graham, M.S.; Vercauteren, T.; Cardoso, M.J. A 3D generative model of pathological multi-modal MR images and segmentations. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention; Springer: Cham, Switzerland, 2023; pp. 132–142. [Google Scholar]
  22. Kaswan, K.S.; Dhatterwal, J.S.; Malik, K.; Baliyan, A. Generative AI: A Review on Models and Applications. In Proceedings of the 2023 International Conference on Communication, Security and Artificial Intelligence (ICCSAI), Greater Noida, India, 23–25 November 2023; IEEE: New York, NY, USA, 2023; pp. 699–704. [Google Scholar]
  23. Sai, S.; Gaur, A.; Sai, R.; Chamola, V.; Guizani, M.; Rodrigues, J.J. Generative ai for transformative healthcare: A comprehensive study of emerging models, applications, case studies and limitations. IEEE Access 2024, 12, 31078–31106. [Google Scholar] [CrossRef]
  24. Kuzlu, M.; Xiao, Z.; Sarp, S.; Catak, F.O.; Gurler, N.; Guler, O. The rise of generative artificial intelligence in healthcare. In Proceedings of the 2023 12th Mediterranean Conference on Embedded Computing (MECO), Budva, Montenegro, 6–10 June 2023; IEEE: New York, NY, USA, 2023; pp. 1–4. [Google Scholar]
  25. Rouzrokh, P.; Khosravi, B.; Faghani, S.; Moassefi, M.; Shariatnia, M.M.; Rouzrokh, P.; Erickson, B. A Current Review of Generative AI in Medicine: Core Concepts, Applications, and Current Limitations. Curr. Rev. Musculoskelet. Med. 2025, 18, 246–266. [Google Scholar] [CrossRef] [PubMed]
  26. Hussain, J.; Båth, M.; Ivarsson, J. Generative adversarial networks in medical image reconstruction: A systematic literature review. Comput. Biol. Med. 2025, 191, 110094. [Google Scholar] [CrossRef]
  27. Ha, J.; Park, J.S.; Crandall, D.; Garyfallidis, E.; Zhang, X. Multi-Resolution Guided 3D GANs for Medical Image Translation. In Proceedings of the 2025 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), Tucson, AZ, USA, 26 February–6 March 2025; IEEE: New York, NY, USA, 2025; pp. 4342–4351. [Google Scholar]
  28. Islam, S.; Aziz, M.T.; Nabil, H.R.; Jim, J.R.; Mridha, M.F.; Kabir, M.M.; Asai, N.; Shin, J. Generative adversarial networks (GANs) in medical imaging: Advancements, applications, and challenges. IEEE Access 2024, 12, 35728–35753. [Google Scholar] [CrossRef]
  29. Chai, P.; Hou, L.; Zhang, G.; Tushar, Q.; Zou, Y. Generative adversarial networks in construction applications. Autom. Constr. 2024, 159, 105265. [Google Scholar] [CrossRef]
  30. Bhuyan, S.S.; Sateesh, V.; Mukul, N.; Galvankar, A.; Mahmood, A.; Nauman, M.; Rai, A.; Bordoloi, K.; Basu, U.; Samuel, J. Generative artificial intelligence use in healthcare: Opportunities for clinical excellence and administrative efficiency. J. Med. Syst. 2025, 49, 10. [Google Scholar] [CrossRef]
  31. Peng, Y. A comparative analysis between gan and diffusion models in image generation. Trans. Comput. Sci. Intell. Syst. Res. 2024, 5, 189–195. [Google Scholar] [CrossRef]
  32. Vivekananthan, S. Comparative analysis of generative models: Enhancing image synthesis with vaes, gans, and stable diffusion. arXiv 2024, arXiv:2408.08751. [Google Scholar] [CrossRef]
  33. Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative adversarial nets. Adv. Neural Inf. Process. Syst. 2014, 27. [Google Scholar] [CrossRef]
  34. Chiang, Y.H.; Tseng, B.Y.; Wang, J.P.; Chen, Y.W.; Tung, C.C.; Yu, C.H.; Chen, P.Y.; Chen, C.S. Generating three-dimensional bioinspired microstructures using transformer-based generative adversarial network. J. Mater. Res. Technol. 2023, 27, 6117–6134. [Google Scholar] [CrossRef]
  35. Hu, Y.; Kothapalli, S.V.; Gan, W.; Sukstanskii, A.L.; Wu, G.F.; Goyal, M.; Yablonskiy, D.A.; Kamilov, U.S. DiffGEPCI: 3D MRI synthesis from mGRE signals using 2.5 D diffusion model. In Proceedings of the 2024 IEEE International Symposium on Biomedical Imaging (ISBI), Athens, Greece, 27–30 May 2024; IEEE: New York, NY, USA, 2024; pp. 1–4. [Google Scholar]
  36. Narotamo, H.; Ouarné, M.; Franco, C.A.; Silveira, M. Synthetic Generation of 3D Microscopy Images using Generative Adversarial Networks. In Proceedings of the 2022 44th Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), Glasgow, Scotland, UK, 11–15 July 2022; IEEE: New York, NY, USA, 2022; pp. 549–552. [Google Scholar]
  37. Mishra, A.; Majumder, A.; Kommineni, D.; Joseph, C.A.; Chowdhury, T.; Anumula, S.K. Role of Generative Artificial Intelligence in Personalized Medicine: A Systematic Review. Cureus 2025, 17, e82310. [Google Scholar] [CrossRef] [PubMed]
  38. Chen, R.; Huang, W.; Huang, B.; Sun, F.; Fang, B. Reusing discriminators for encoding: Towards unsupervised image-to-image translation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Beijing, China, 28 March 2020; IEEE: New York, NY, USA, 2020; pp. 8168–8177. [Google Scholar]
  39. Dalmaz, O.; Yurt, M.; Çukur, T. ResViT: Residual vision transformers for multimodal medical image synthesis. IEEE Trans. Med. Imaging 2022, 41, 2598–2614. [Google Scholar] [CrossRef] [PubMed]
  40. Isola, P.; Zhu, J.Y.; Zhou, T.; Efros, A.A. Image-to-image translation with conditional adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; IEEE: New York, NY, USA, 2017; pp. 1125–1134. [Google Scholar]
  41. Kong, L.; Lian, C.; Huang, D.; Hu, Y.; Zhou, Q. Breaking the dilemma of medical image-to-image translation. Adv. Neural Inf. Process. Syst. 2021, 34, 1964–1978. [Google Scholar]
  42. Zhu, J.Y.; Park, T.; Isola, P.; Efros, A.A. Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; IEEE: New York, NY, USA, 2017; pp. 2223–2232. [Google Scholar]
  43. Ellis, S.; Manzanera, O.E.M.; Baltatzis, V.; Nawaz, I.; Nair, A.; Folgoc, L.L.; Desai, S.; Glocker, B.; Schnabel, J.A. Evaluation of 3D GANs for lung tissue modelling in pulmonary CT. arXiv 2022, arXiv:2208.08184. [Google Scholar] [CrossRef]
  44. Chen, K.; Ramsey, L. Deep Generative Models for 3D Content Creation: A Comprehensive Survey of Architectures, Challenges, and Emerging Trends. Preprint 2024. [Google Scholar] [CrossRef]
  45. Subasi, A. Artificial intelligence for 3D medical image analysis. In Applications of Artificial Intelligence Healthcare and Biomedicine; Elsevier: Amsterdam, The Netherlands, 2024; pp. 357–375. [Google Scholar]
  46. Mamo, A.A.; Gebresilassie, B.G.; Mukherjee, A.; Hassija, V.; Chamola, V. Advancing Medical Imaging Through Generative Adversarial Networks: A Comprehensive Review and Future Prospects. Cogn. Comput. 2024, 16, 2131–2153. [Google Scholar] [CrossRef]
  47. Heng, Y.; Yinghua, M.; Khan, F.G.; Khan, A.; Ali, F.; AlZubi, A.A.; Hui, Z. Survey: Application and analysis of generative adversarial networks in medical images. Artif. Intell. Rev. 2024, 58, 39. [Google Scholar] [CrossRef]
  48. Jeong, J.J.; Tariq, A.; Adejumo, T.; Trivedi, H.; Gichoya, J.W.; Banerjee, I. Systematic review of generative adversarial networks (GANs) for medical image classification and segmentation. J. Digit. Imaging 2022, 35, 137–152. [Google Scholar] [CrossRef]
  49. Ali, M.; Ali, M.; Hussain, M.; Koundal, D. Generative adversarial networks (GANs) for medical image processing: Recent advancements. Arch. Comput. Methods Eng. 2025, 32, 1185–1198. [Google Scholar] [CrossRef]
  50. Alshanbari, A.H.; Alzahrani, S.M. Generative AI for Diagnostic Medical Imaging: A Review. Curr. Med. Imaging 2025, 21, E15734056369157. [Google Scholar] [CrossRef]
  51. Ferreira, A.; Li, J.; Pomykala, K.L.; Kleesiek, J.; Alves, V.; Egger, J. GAN-based generation of realistic 3D volumetric data: A systematic review and taxonomy. Med. Image Anal. 2024, 93, 103100. [Google Scholar] [CrossRef] [PubMed]
  52. Oulmalme, C.; Nakouri, H.; Jaafar, F. A systematic review of generative AI approaches for medical image enhancement: Comparing GANs, transformers, and diffusion models. Int. J. Med. Inform. 2025, 199, 105903. [Google Scholar] [CrossRef]
  53. Sharafudeen, M.; Vinod Chandra, S. Medical deepfake detection using 3-dimensional neural learning. In Proceedings of the IAPR Workshop on Artificial Neural Networks in Pattern Recognition, Dubai, United Arab Emirates, 24–26 November 2022; Springer: Cham, Switzerland, 2022; pp. 169–180. [Google Scholar]
  54. Kearney, V.; Ziemer, B.P.; Perry, A.; Wang, T.; Chan, J.W.; Ma, L.; Morin, O.; Yom, S.S.; Solberg, T.D. Attention-aware discrimination for MR-to-CT image translation using cycle-consistent generative adversarial networks. Radiol. Artif. Intell. 2020, 2, e190027. [Google Scholar] [CrossRef]
  55. Saranya, K.; Valarmathi, A. Secure Medical Image Transmission and Storage in IoT Cloud Using GAN-RBM with Real-Time Analysis. 2024. Available online: https://www.researchsquare.com/article/rs-4847590/v1 (accessed on 2 October 2025).
  56. Peng, W.; Xia, T.; Ribeiro, F.D.S.; Bosschieter, T.; Adeli, E.; Zhao, Q.; Glocker, B.; Pohl, K.M. Latent 3D Brain MRI Counterfactual. arXiv 2024, arXiv:2409.05585. [Google Scholar] [CrossRef]
  57. Gao, J.; Zhao, W.; Li, P.; Huang, W.; Chen, Z. LEGAN: A Light and Effective Generative Adversarial Network for medical image synthesis. Comput. Biol. Med. 2022, 148, 105878. [Google Scholar] [CrossRef]
  58. Ibrahim, M.; Al Khalil, Y.; Amirrajab, S.; Sun, C.; Breeuwer, M.; Pluim, J.; Elen, B.; Ertaylan, G.; Dumontier, M. Generative AI for synthetic data across multiple medical modalities: A systematic review of recent developments and challenges. Comput. Biol. Med. 2025, 189, 109834. [Google Scholar] [CrossRef]
  59. Kazeminia, S.; Baur, C.; Kuijper, A.; Van Ginneken, B.; Navab, N.; Albarqouni, S.; Mukhopadhyay, A. GANs for medical image analysis. Artif. Intell. Med. 2020, 109, 101938. [Google Scholar] [CrossRef]
  60. Reddy, S. Generative AI in healthcare: An implementation science informed translational path on application, integration and governance. Implement. Sci. 2024, 19, 27. [Google Scholar] [CrossRef]
  61. Preim, B.; Bartz, D. Visualization in Medicine: Theory, Algorithms, and Applications; Elsevier: Amsterdam, The Netherlands, 2007. [Google Scholar]
  62. Segato, A.; Corbetta, V.; Di Marzo, M.; Pozzi, L.; De Momi, E. Data augmentation of 3D brain environment using deep convolutional refined auto-encoding alpha GAN. IEEE Trans. Med. Robot. Bionics 2020, 3, 269–272. [Google Scholar] [CrossRef]
  63. Shen, D.; Liu, T.; Peters, T.M.; Staib, L.H.; Essert, C.; Zhou, S.; Yap, P.T.; Khan, A. Medical Image Computing and Computer Assisted Intervention–MICCAI 2019: 22nd International Conference, Shenzhen, China, 13–17 October 2019, Proceedings, Part II; Springer Nature: Cham, Switzerland, 2019; Volume 11765. [Google Scholar]
  64. Xing, S.; Sinha, H.; Hwang, S.J. Cycle consistent embedding of 3D brains with auto-encoding generative adversarial networks. In Proceedings of the Medical Imaging with Deep Learning, Lübeck, Germany, 12 May 2021. [Google Scholar]
  65. Sun, L.; Chen, J.; Xu, Y.; Gong, M.; Yu, K.; Batmanghelich, K. Hierarchical amortized training for memory-efficient high resolution 3D GAN. arXiv 2020, arXiv:2008.01910. [Google Scholar]
  66. Messaoudi, H.; Belaid, A.; Salem, D.B.; Conze, P.H. Cross-dimensional transfer learning in medical image segmentation with deep learning. Med. Image Anal. 2023, 88, 102868. [Google Scholar] [CrossRef]
  67. Liu, Y.; Dwivedi, G.; Boussaid, F.; Sanfilippo, F.; Yamada, M.; Bennamoun, M. Inflating 2D convolution weights for efficient generation of 3D medical images. Comput. Methods Programs Biomed. 2023, 240, 107685. [Google Scholar] [CrossRef]
  68. Singh, S.P.; Wang, L.; Gupta, S.; Goli, H.; Padmanabhan, P.; Gulyás, B. 3D deep learning on medical images: A review. Sensors 2020, 20, 5097. [Google Scholar] [CrossRef]
  69. Shiri, M.; Bruno, A.; Loiacono, D. Memory-Efficient 3D High-Resolution Medical Image Synthesis Using CRF-Guided GANs. In Proceedings of the International Conference on Pattern Recognition, Kolkata, India, 1 December 2024; Springer: Cham, Switzerland, 2024; pp. 184–194. [Google Scholar]
  70. Ju, Z.; Zhou, W.; Kong, L.; Chen, Y.; Li, Y.; Sun, Z.; Shan, C. HAGAN: Hybrid Augmented Generative Adversarial Network for Medical Image Synthesis. arXiv 2024, arXiv:2405.04902. [Google Scholar] [CrossRef]
  71. Page, M.J.; McKenzie, J.E.; Bossuyt, P.M.; Boutron, I.; Hoffmann, T.C.; Mulrow, C.D.; Shamseer, L.; Tetzlaff, J.M.; Akl, E.A.; Brennan, S.E.; et al. The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. BMJ 2021, 372, n71. [Google Scholar] [CrossRef]
  72. Whiting, P.; Savović, J.; Higgins, J.P.; Caldwell, D.M.; Reeves, B.C.; Shea, B.; Davies, P.; Kleijnen, J.; Churchill, R.; ROBIS Group. ROBIS: A new tool to assess risk of bias in systematic reviews was developed. J. Clin. Epidemiol. 2016, 69, 225–234. [Google Scholar] [CrossRef]
  73. Kim, J.; Li, Y.; Shin, B.S. 3D-DGGAN: A Data-Guided Generative Adversarial Network for High Fidelity in Medical Image Generation. IEEE J. Biomed. Health Inform. 2024, 28, 2904–2915. [Google Scholar] [CrossRef] [PubMed]
  74. Sun, L.; Chen, J.; Xu, Y.; Gong, M.; Yu, K.; Batmanghelich, K. Hierarchical amortized GAN for 3D high resolution medical image synthesis. IEEE J. Biomed. Health Inform. 2022, 26, 3966–3975. [Google Scholar] [CrossRef] [PubMed]
  75. Prakash, P.S.; Rao, P.K.; Babu, E.S.; Khan, S.B.; Almusharraf, A.; Quasim, M.T. Decoupled SculptorGAN Framework for 3D Reconstruction and Enhanced Segmentation of Kidney Tumors in CT Images. IEEE Access 2024, 12, 62189–62198. [Google Scholar] [CrossRef]
  76. Hwang, S.; Lee, J.J.; Shin, J. 3D Knee Structure Reconstruction from 2D X-rays Based on Generative Deep Learning Models. In Proceedings of the 2024 International Technical Conference on Circuits/Systems, Computers, and Communications (ITC-CSCC), Okinawa, Japan, 2–5 July 2024; IEEE: New York, NY, USA, 2024; pp. 1–5. [Google Scholar]
  77. Hu, B.; Zhan, C.; Tang, B.; Wang, B.; Lei, B.; Wang, S.Q. 3-D brain reconstruction by hierarchical shape-perception network from a single incomplete image. IEEE Trans. Neural Netw. Learn. Syst. 2023, 35, 13271–13283. [Google Scholar] [CrossRef] [PubMed]
  78. Zhou, Y.; Yang, Z.; Zhang, H.; Eric, I.; Chang, C.; Fan, Y.; Xu, Y. 3D segmentation guided style-based generative adversarial networks for pet synthesis. IEEE Trans. Med. Imaging 2022, 41, 2092–2104. [Google Scholar] [CrossRef] [PubMed]
  79. Rezaei, S.R.; Ahmadi, A. A GAN-based method for 3D lung tumor reconstruction boosted by a knowledge transfer approach. Multimed. Tools Appl. 2023, 82, 44359–44385. [Google Scholar] [CrossRef]
  80. Elloumi, N.; Seddik, H. The ideal PGGAN for the 3D medical data Segmenting. In Proceedings of the 2024 IEEE 7th International Conference on Advanced Technologies, Signal and Image Processing (ATSIP), Sousse, Tunisia, 11–13 July 2024; IEEE: New York, NY, USA, 2024; Volume 1, pp. 665–675. [Google Scholar]
  81. Safari, M.; Fatemi, A.; Archambault, L. MedFusionGAN: Multimodal medical image fusion using an unsupervised deep generative adversarial network. BMC Med. Imaging 2023, 23, 203. [Google Scholar] [CrossRef]
  82. Subramaniam, P.; Kossen, T.; Ritter, K.; Hennemuth, A.; Hildebrand, K.; Hilbert, A.; Sobesky, J.; Livne, M.; Galinovic, I.; Khalil, A.A.; et al. Generating 3D TOF-MRA volumes and segmentation labels using generative adversarial networks. Med. Image Anal. 2022, 78, 102396. [Google Scholar] [CrossRef]
  83. Zi, Y.; Wang, Q.; Gao, Z.; Cheng, X.; Mei, T. Research on the application of deep learning in medical image segmentation and 3d reconstruction. Acad. J. Sci. Technol. 2024, 10, 8–12. [Google Scholar] [CrossRef]
  84. Tudosiu, P.D.; Pinaya, W.H.L.; Graham, M.S.; Borges, P.; Fernandez, V.; Yang, D.; Appleyard, J.; Novati, G.; Mehra, D.; Vella, M.; et al. Morphology-preserving autoregressive 3d generative modelling of the brain. In Proceedings of the International Workshop on Simulation and Synthesis in Medical Imaging, Singapore, 18 September 2022; Springer: Cham, Switzerland, 2022; pp. 66–78. [Google Scholar]
  85. Bui, N.T.; Hoang, D.H.; Tran, M.T.; Doretto, G.; Adjeroh, D.; Patel, B.; Choudhary, A.; Le, N. Sam3d: Segment anything model in volumetric medical images. In Proceedings of the 2024 IEEE International Symposium on Biomedical Imaging (ISBI), Athens, Greece, 27–30 May 2024; IEEE: New York, NY, USA, 2024; pp. 1–4. [Google Scholar]
  86. Tyagi, S.; Talbar, S.N. CSE-GAN: A 3D conditional generative adversarial network with concurrent squeeze-and-excitation blocks for lung nodule segmentation. Comput. Biol. Med. 2022, 147, 105781. [Google Scholar] [CrossRef]
  87. Poonkodi, S.; Kanchana, M. 3D-MedTranCSGAN: 3D medical image transformation using CSGAN. Comput. Biol. Med. 2023, 153, 106541. [Google Scholar] [CrossRef]
  88. Jung, E.; Luna, M.; Park, S.H. Conditional GAN with 3D discriminator for MRI generation of Alzheimer’s disease progression. Pattern Recognit. 2023, 133, 109061. [Google Scholar] [CrossRef]
  89. Ge, R.; Shi, F.; Chen, Y.; Tang, S.; Zhang, H.; Lou, X.; Zhao, W.; Coatrieux, G.; Gao, D.; Li, S.; et al. Improving anisotropy resolution of computed tomography and annotation using 3D super-resolution network. Biomed. Signal Process. Control 2023, 82, 104590. [Google Scholar] [CrossRef]
  90. Aydin, O.U.; Hilbert, A.; Koch, A.; Lohrke, F.; Rieger, J.; Tanioka, S.; Frey, D. Generative Modeling of the Circle of Willis Using 3D-StyleGAN. NeuroImage 2024, 304, 120936. [Google Scholar] [CrossRef]
  91. King, S.; Hollenbenders, Y.; Reichenbach, A. Efficient synthesis of 3D MR images for schizophrenia diagnosis classification with generative adversarial networks. Comput. Methods Programs Biomed. Update 2024, 8, 100197. [Google Scholar] [CrossRef]
  92. Zhou, M.; Wagner, M.W.; Tabori, U.; Hawkins, C.; Ertl-Wagner, B.B.; Khalvati, F. Generating 3D brain tumor regions in MRI using vector-quantization Generative Adversarial Networks. Comput. Biol. Med. 2025, 185, 109502. [Google Scholar] [CrossRef]
  93. Liu, M.; Shao, X.; Jiang, L.; Wu, K. 3D EAGAN: 3D edge-aware attention generative adversarial network for prostate segmentation in transrectal ultrasound images. Quant. Imaging Med. Surg. 2024, 14, 4067. [Google Scholar] [CrossRef]
  94. Zhou, M.; Khalvati, F. Conditional Generation of 3D Brain Tumor Regions via VQGAN and Temporal-Agnostic Masked Transformer. In Proceedings of the Medical Imaging with Deep Learning, Paris, France, 3–5 July 2024. [Google Scholar]
  95. Corona-Figueroa, A.; Shum, H.P.; Willcocks, C.G. Repeat and Concatenate: 2D to 3D Image Translation with 3D to 3D Generative Modeling. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 16–22 June 2024; pp. 2315–2324. [Google Scholar]
  96. Çelik, G.; Talu, M.F. A new 3D MRI segmentation method based on Generative Adversarial Network and Atrous Convolution. Biomed. Signal Process. Control 2022, 71, 103155. [Google Scholar] [CrossRef]
  97. Kim, J.; Li, Y.; Shin, B.S. Volumetric Imitation Generative Adversarial Networks for Anatomical Human Body Modeling. Bioengineering 2024, 11, 163. [Google Scholar] [CrossRef] [PubMed]
  98. Sun, B.; Jia, S.; Jiang, X.; Jia, F. Double U-Net CycleGAN for 3D MR to CT image synthesis. Int. J. Comput. Assist. Radiol. Surg. 2023, 18, 149–156. [Google Scholar] [CrossRef] [PubMed]
  99. Mensing, D.; Hirsch, J.; Wenzel, M.; Günther, M. 3D (c) GAN for whole body MR synthesis. In Proceedings of the MICCAI Workshop on Deep Generative Models, Singapore, 22 September 2022; Springer: Berlin/Heidelberg, Germany, 2022; pp. 97–105. [Google Scholar]
  100. Vagni, M.; Tran, H.E.; Romano, A.; Chiloiro, G.; Boldrini, L.; Zormpas-Petridis, K.; Kawula, M.; Landry, G.; Kurz, C.; Corradini, S.; et al. Auto-segmentation of pelvic organs at risk on 0.35 T MRI using 2D and 3D Generative Adversarial Network models. Phys. Medica 2024, 119, 103297. [Google Scholar] [CrossRef]
  101. Kanakatte, A.; Bhatia, D.; Ghose, A. 3D cardiac substructures segmentation from CMRI using generative adversarial network (GAN). In Proceedings of the 2022 44th Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), Glasgow, Scotland, UK, 11–15 July 2022; IEEE: New York, NY, USA, 2022; pp. 1698–1701. [Google Scholar]
  102. Tiago, C.; Gilbert, A.; Beela, A.S.; Aase, S.A.; Snare, S.R.; Šprem, J.; McLeod, K. A data augmentation pipeline to generate synthetic labeled datasets of 3D echocardiography images using a GAN. IEEE Access 2022, 10, 98803–98815. [Google Scholar] [CrossRef]
103. Elloumi, N.; Mbarki, Z.; Seddik, H. 3D medical images segmentation and securing based GAN architecture and watermarking algorithm using Schur decomposition. In Proceedings of the 2023 IEEE Afro-Mediterranean Conference on Artificial Intelligence (AMCAI), Hammamet, Tunisia, 13–15 December 2023; IEEE: New York, NY, USA, 2023; pp. 1–8. [Google Scholar]
  104. Sharaby, I.; Alksas, A.; Balaha, H.M.; Mahmoud, A.; Badawy, M.; Abou El-Ghar, M.; Khalil, A.; Ghazal, M.; Contractor, S.; El-Baz, A. A Novel Approach for 3D Renal Segmentation Using a Modified GAN Model and Texture Analysis. In Proceedings of the 2024 IEEE International Conference on Image Processing (ICIP), Abu Dhabi, United Arab Emirates, 27–30 October 2024; IEEE: New York, NY, USA, 2024; pp. 3151–3157. [Google Scholar]
  105. Sun, M.; Li, X.; Sun, W. Image Generation and Lesion Segmentation of Brain Tumors and Stroke Based on GAN and 3D ResU-Net. IEEE Access 2024, 13, 125629–125644. [Google Scholar] [CrossRef]
  106. Chithra, P.L.; Dhivya, S. 3D MRI Image Synthesizing using Commix GAN. In Proceedings of the 2024 International Conference on Trends in Quantum Computing and Emerging Business Technologies, Pune, India, 22–23 March 2024; IEEE: New York, NY, USA, 2024; pp. 1–6. [Google Scholar]
  107. Gao, Y.; Tang, H.; Ge, R.; Liu, J.; Chen, X.; Xi, Y.; Ji, X.; Shu, H.; Zhu, J.; Coatrieux, G.; et al. 3DSRNet: 3D Spine Reconstruction Network Using 2D Orthogonal X-ray Images Based on Deep Learning. IEEE Trans. Instrum. Meas. 2023, 72, 4506214. [Google Scholar] [CrossRef]
  108. Kermi, A.; Behaz, M.K.N.; Benamar, A.; Khadir, M.T. A Deep Learning-based 3D-GAN for Glioma Subregions Detection and Segmentation in Multimodal Brain MRI volumes. In Proceedings of the 2022 International Symposium on iNnovative Informatics of Biskra (ISNIB), Biskra, Algeria, 7–8 December 2022; IEEE: New York, NY, USA, 2022; pp. 1–7. [Google Scholar]
  109. Xue, Y.; Peng, Y.; Bi, L.; Feng, D.; Kim, J. CG-3DSRGAN: A classification guided 3D generative adversarial network for image quality recovery from low-dose PET images. In Proceedings of the 2023 45th Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), Sydney, Australia, 24–27 July 2023; IEEE: New York, NY, USA, 2023; pp. 1–4. [Google Scholar]
  110. Zhang, X.; He, X.; Guo, J.; Ettehadi, N.; Aw, N.; Semanek, D.; Posner, J.; Laine, A.; Wang, Y. PTNet3D: A 3D high-resolution longitudinal infant brain MRI synthesizer based on transformers. IEEE Trans. Med. Imaging 2022, 41, 2925–2940. [Google Scholar] [CrossRef] [PubMed]
  111. Pradhan, N.; Dhaka, V.S.; Rani, G.; Pradhan, V.; Vocaturo, E.; Zumpano, E. Conditional generative adversarial network model for conversion of 2 dimensional radiographs into 3 dimensional views. IEEE Access 2023, 11, 96283–96296. [Google Scholar] [CrossRef]
  112. Xia, M.; Yang, H.; Huang, Y.; Qu, Y.; Guo, Y.; Zhou, G.; Zhang, F.; Wang, Y. AwCPM-Net: A collaborative constraint GAN for 3D coronary artery reconstruction in intravascular ultrasound sequences. IEEE J. Biomed. Health Inform. 2022, 26, 3047–3058. [Google Scholar] [CrossRef]
  113. He, R.; Xu, S.; Liu, Y.; Li, Q.; Liu, Y.; Zhao, N.; Yuan, Y.; Zhang, H. Three-dimensional liver image segmentation using generative adversarial networks based on feature restoration. Front. Med. 2022, 8, 794969. [Google Scholar] [CrossRef]
  114. Joseph, J.; Pournami, P.; Jayaraj, P. Supervised fan beam computed tomography image synthesis using 3D CycleGAN. In Proceedings of the 2022 IEEE International Conference on Signal Processing, Informatics, Communication and Energy Systems (SPICES), Thiruvananthapuram, India, 10–12 March 2022; IEEE: New York, NY, USA, 2022; Volume 1, pp. 81–86. [Google Scholar]
  115. Dong, Y.; Yang, F.; Wen, J.; Cai, J.; Zeng, F.; Liu, M.; Li, S.; Wang, J.; Ford, J.C.; Portelance, L.; et al. Improvement of 2D cine image quality using 3D priors and cycle generative adversarial network for low field MRI-guided radiation therapy. Med. Phys. 2024, 51, 3495–3509. [Google Scholar] [CrossRef]
  116. Zhang, K.; Hu, H.; Philbrick, K.; Conte, G.M.; Sobek, J.D.; Rouzrokh, P.; Erickson, B.J. SOUP-GAN: Super-resolution MRI using generative adversarial networks. Tomography 2022, 8, 905–919. [Google Scholar] [CrossRef]
  117. Amran, D.; Artzi, M.; Aizenstein, O.; Ben Bashat, D.; Bermano, A.H. BV-GAN: 3D time-of-flight magnetic resonance angiography cerebrovascular vessel segmentation using adversarial CNNs. J. Med. Imaging 2022, 9, 044503. [Google Scholar] [CrossRef]
  118. Wang, Y.; Wu, W.; Yang, Y.; Hu, H.; Yu, S.; Dong, X.; Chen, F.; Liu, Q. Deep learning-based 3D MRI contrast-enhanced synthesis from a 2D noncontrast T2Flair sequence. Med. Phys. 2022, 49, 4478–4493. [Google Scholar] [CrossRef]
  119. Zhang, Q.; Hang, Y.; Wu, F.; Wang, S.; Hong, Y. Super-resolution of 3D medical images by generative adversarial networks with long and short-term memory and attention. Sci. Rep. 2025, 15, 20828. [Google Scholar] [CrossRef] [PubMed]
  120. Xing, X.; Li, X.; Wei, C.; Zhang, Z.; Liu, O.; Xie, S.; Chen, H.; Quan, S.; Wang, C.; Yang, X.; et al. DP-GAN+ B: A lightweight generative adversarial network based on depthwise separable convolutions for generating CT volumes. Comput. Biol. Med. 2024, 174, 108393. [Google Scholar] [CrossRef] [PubMed]
  121. Fujita, A.; Goto, K.; Ueda, A.; Kuroda, Y.; Kawai, T.; Okuzu, Y.; Okuno, Y.; Matsuda, S. Measurement of the acetabular cup orientation after total hip arthroplasty based on 3-dimensional reconstruction from a single X-ray image using generative adversarial networks. J. Arthroplast. 2025, 40, 136–143. [Google Scholar] [CrossRef] [PubMed]
  122. Touati, R.; Le, W.T.; Kadoury, S. Multi-planar dual adversarial network based on dynamic 3D features for MRI-CT head and neck image synthesis. Phys. Med. Biol. 2024, 69, 155012. [Google Scholar] [CrossRef]
  123. Chen, Z.; Jiang, M.; Chiu, B. Unsupervised shape-and-texture-based generative adversarial tuning of pre-trained networks for carotid segmentation from 3D ultrasound images. Med. Phys. 2024, 51, 7240–7256. [Google Scholar] [CrossRef]
  124. Bazangani, F.; Richard, F.J.; Ghattas, B.; Guedj, E. FDG-PET to T1 weighted MRI translation with 3D elicit generative adversarial network (E-GAN). Sensors 2022, 22, 4640. [Google Scholar] [CrossRef]
  125. Lin, J.; Li, Z.; Zeng, Y.; Liu, X.; Li, L.; Jahanshad, N.; Ge, X.; Zhang, D.; Lu, M.; Liu, M.; et al. Harmonizing three-dimensional MRI using pseudo-warping field guided GAN. NeuroImage 2024, 295, 120635. [Google Scholar] [CrossRef]
  126. Grover, V.P.; Tognarelli, J.M.; Crossey, M.M.; Cox, I.J.; Taylor-Robinson, S.D.; McPhail, M.J. Magnetic resonance imaging: Principles and techniques: Lessons for clinicians. J. Clin. Exp. Hepatol. 2015, 5, 246–255. [Google Scholar] [CrossRef]
  127. Gholipour, A.; Afacan, O.; Aganj, I.; Scherrer, B.; Prabhu, S.P.; Sahin, M.; Warfield, S.K. Super-resolution reconstruction in frequency, image, and wavelet domains to reduce through-plane partial voluming in MRI. Med. Phys. 2015, 42, 6919–6932. [Google Scholar] [CrossRef]
  128. Wang, L.; Zhu, H.; He, Z.; Jia, Y.; Du, J. Adjacent slices feature transformer network for single anisotropic 3D brain MRI image super-resolution. Biomed. Signal Process. Control 2022, 72, 103339. [Google Scholar] [CrossRef]
  129. Power, S.P.; Moloney, F.; Twomey, M.; James, K.; O’Connor, O.J.; Maher, M.M. Computed tomography and patient risk: Facts, perceptions and uncertainties. World J. Radiol. 2016, 8, 902. [Google Scholar] [CrossRef] [PubMed]
  130. Zingarini, G.; Cozzolino, D.; Corvi, R.; Poggi, G.; Verdoliva, L. M3Dsynth: A dataset of medical 3D images with AI-generated local manipulations. In Proceedings of the ICASSP 2024-2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Seoul, Republic of Korea, 14–19 April 2024; IEEE: New York, NY, USA, 2024; pp. 13176–13180. [Google Scholar]
  131. Wang, Y.; Xiong, H.; Sun, K.; Bai, S.; Dai, L.; Ding, Z.; Liu, J.; Wang, Q.; Liu, Q.; Shen, D. Toward general text-guided multimodal brain MRI synthesis for diagnosis and medical image analysis. Cell Rep. Med. 2025, 6, 102182. [Google Scholar] [CrossRef] [PubMed]
  132. Yuan, S.; Chen, X.; Liu, Y.; Zhu, J.; Men, K.; Dai, J. Comprehensive evaluation of similarity between synthetic and real CT images for nasopharyngeal carcinoma. Radiat. Oncol. 2023, 18, 182. [Google Scholar] [CrossRef] [PubMed]
  133. Dishner, K.A.; McRae-Posani, B.; Bhowmik, A.; Jochelson, M.S.; Holodny, A.; Pinker, K.; Eskreis-Winkler, S.; Stember, J.N. A survey of publicly available MRI datasets for potential use in artificial intelligence research. J. Magn. Reson. Imaging 2024, 59, 450–480. [Google Scholar] [CrossRef]
  134. Paudyal, R.; Shah, A.D.; Akin, O.; Do, R.K.; Konar, A.S.; Hatzoglou, V.; Mahmood, U.; Lee, N.; Wong, R.J.; Banerjee, S.; et al. Artificial intelligence in CT and MR imaging for oncological applications. Cancers 2023, 15, 2573. [Google Scholar] [CrossRef]
  135. Islam, J.; Zhang, Y. GAN-based synthetic brain PET image generation. Brain Inform. 2020, 7, 3. [Google Scholar] [CrossRef]
  136. Hirte, A.U.; Platscher, M.; Joyce, T.; Heit, J.J.; Tranvinh, E.; Federau, C. Realistic generation of diffusion-weighted magnetic resonance brain images with deep generative models. Magn. Reson. Imaging 2021, 81, 60–66. [Google Scholar] [CrossRef]
  137. Barile, B.; Marzullo, A.; Stamile, C.; Durand-Dubief, F.; Sappey-Marinier, D. Data augmentation using generative adversarial neural networks on brain structural connectivity in multiple sclerosis. Comput. Methods Programs Biomed. 2021, 206, 106113. [Google Scholar] [CrossRef]
  138. Wolterink, J.M.; Mukhopadhyay, A.; Leiner, T.; Vogl, T.J.; Bucher, A.M.; Išgum, I. Generative adversarial networks: A primer for radiologists. Radiographics 2021, 41, 840–857. [Google Scholar] [CrossRef]
  139. Yuan, W.; Wei, J.; Wang, J.; Ma, Q.; Tasdizen, T. Unified generative adversarial networks for multimodal segmentation from unpaired 3D medical images. Med. Image Anal. 2020, 64, 101731. [Google Scholar] [CrossRef]
  140. Armanious, K.; Hepp, T.; Küstner, T.; Dittmann, H.; Nikolaou, K.; La Fougère, C.; Yang, B.; Gatidis, S. Independent attenuation correction of whole body [18 F] FDG-PET using a deep learning approach with Generative Adversarial Networks. EJNMMI Res. 2020, 10, 1–9. [Google Scholar] [CrossRef] [PubMed]
  141. Li, Z.; Zhou, S.; Huang, J.; Yu, L.; Jin, M. Investigation of low-dose CT image denoising using unpaired deep learning methods. IEEE Trans. Radiat. Plasma Med. Sci. 2020, 5, 224–234. [Google Scholar] [CrossRef] [PubMed]
  142. Clement David-Olawade, A.; Olawade, D.B.; Vanderbloemen, L.; Rotifa, O.B.; Fidelis, S.C.; Egbon, E.; Akpan, A.O.; Adeleke, S.; Ghose, A.; Boussios, S. AI-Driven Advances in Low-Dose Imaging and Enhancement—A Review. Diagnostics 2025, 15, 689. [Google Scholar] [CrossRef] [PubMed]
  143. Kim, J.; Kim, J.; Han, G.; Rim, C.; Jo, H. Low-dose CT Image Restoration using generative adversarial networks. Inform. Med. Unlocked 2020, 21, 100468. [Google Scholar] [CrossRef]
  144. Alqushaibi, A.; Hasan, M.H.; Abdulkadir, S.J.; Danyaro, K.U.; Ragab, M.G.; Al-Selwi, S.M.; Sumiea, E.H.; Alhussian, H. Enhanced Colon Cancer Segmentation and Image Synthesis through Advanced Generative Adversarial Networks based-Sine Cosine Algorithm. IEEE Access 2024, 12, 105354–105369. [Google Scholar] [CrossRef]
  145. Abdollahi, M.; Davoudi, H.; Ebrahimi, M. Combined Medical Image Super-Resolution and Modality Translation Using GAN Transformer-Based Model. In Proceedings of the 2023 International Conference on Computational Science and Computational Intelligence (CSCI), Las Vegas, NV, USA, 13–15 December 2023; IEEE: New York, NY, USA, 2023; pp. 1133–1138. [Google Scholar]
  146. Liu, J.; Yang, Y.; Ai, Y.; Kitrungrotsakul, T.; Wang, F.; Lin, L.; Tong, R.; Chen, Y.W.; Li, J. MVI-Wise GAN: Synthetic MRI to Improve Microvascular Invasion Prediction in Hepatocellular Carcinoma. In Proceedings of the 2023 45th Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), Sydney, Australia, 24–27 July 2023; IEEE: New York, NY, USA, 2023; pp. 1–4. [Google Scholar]
  147. Tripathi, R.P.; Khatri, S.K.; Van Greunen, D.; Ather, D. Enhancing Breast Cancer Diagnosis Through Segmentation-Driven Generative Adversarial Networks for Synthetic Mammogram Generation. In Proceedings of the 2023 3rd International Conference on Technological Advancements in Computational Sciences (ICTACS), Tashkent, Uzbekistan, 1–3 November 2023; IEEE: New York, NY, USA, 2023; pp. 1078–1082. [Google Scholar]
  148. Yu, Y.F.; Zhong, G.; Zhou, Y.; Chen, L. FS-GAN: Fuzzy Self-guided structure retention generative adversarial network for medical image enhancement. Inf. Sci. 2023, 642, 119114. [Google Scholar] [CrossRef]
  149. Onakpojeruo, E.P.; Mustapha, M.T.; Ozsahin, D.U.; Ozsahin, I. A Comparative Analysis of the Novel Conditional Deep Convolutional Neural Network Model, Using Conditional Deep Convolutional Generative Adversarial Network-Generated Synthetic and Augmented Brain Tumor Datasets for Image Classification. Brain Sci. 2024, 14, 559. [Google Scholar] [CrossRef]
  150. Alauthman, M.; Al-Qerem, A.; Sowan, B.; Alsarhan, A.; Eshtay, M.; Aldweesh, A.; Aslam, N. Enhancing small medical dataset classification performance using GAN. Informatics 2023, 10, 28. [Google Scholar] [CrossRef]
  151. Sravani, M.; Aparna, S.; Sabarinath, J.; Kakarla, Y.; Shyam Ganesh, K.; Maddineni, S.; Aparna, S. Enhancing Brain Tumor Diagnosis with Generative Adversarial Networks. In Proceedings of the 2024 14th International Conference on Cloud Computing, Data Science & Engineering (Confluence), Noida, India, 18–19 January 2024; IEEE: New York, NY, USA, 2024; pp. 846–851. [Google Scholar]
  152. Cirillo, M.D.; Abramian, D.; Eklund, A. Vox2Vox: 3D-GAN for brain tumour segmentation. In Proceedings of the Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries: 6th International Workshop, BrainLes 2020, Held in Conjunction with MICCAI 2020, Lima, Peru, 4 October 2020; Revised Selected Papers, Part I 6. Springer: Cham, Switzerland, 2021; pp. 274–284. [Google Scholar]
  153. Rister, B.; Shivakumar, K.; Nobashi, T.; Rubin, D.L. CT-ORG: CT volumes with multiple organ segmentations [Dataset]. Cancer Imaging Arch. 2019, 21. [Google Scholar] [CrossRef]
  154. Van Essen, D.C.; Smith, S.M.; Barch, D.M.; Behrens, T.E.; Yacoub, E.; Ugurbil, K.; WU-Minn HCP Consortium. The WU-Minn human connectome project: An overview. Neuroimage 2013, 80, 62–79. [Google Scholar] [CrossRef]
  155. Regan, E.A.; Hokanson, J.E.; Murphy, J.R.; Make, B.; Lynch, D.A.; Beaty, T.H.; Curran-Everett, D.; Silverman, E.K.; Crapo, J.D. Genetic epidemiology of COPD (COPDGene) study design. COPD J. Chronic Obstr. Pulm. Dis. 2011, 7, 32–43. [Google Scholar] [CrossRef] [PubMed]
  156. Holmes, A.J.; Hollinshead, M.O.; O’keefe, T.M.; Petrov, V.I.; Fariello, G.R.; Wald, L.L.; Fischl, B.; Rosen, B.R.; Mair, R.W.; Roffman, J.L.; et al. Brain Genomics Superstruct Project initial data release with structural, functional, and behavioral measures. Sci. Data 2015, 2, 1–16. [Google Scholar] [CrossRef] [PubMed]
157. Heller, N.; Sathianathen, N.; Kalapara, A.; Walczak, E.; Moore, K.; Kaluzniak, H.; Rosenberg, J.; Blake, P.; Rengel, Z.; Oestreich, M.; et al. The KiTS19 challenge data: 300 kidney tumor cases with clinical context, CT semantic segmentations, and surgical outcomes. arXiv 2019, arXiv:1904.00445. [Google Scholar]
  158. Peterfy, C.G.; Schneider, E.; Nevitt, M. The osteoarthritis initiative: Report on the design rationale for the magnetic resonance imaging protocol for the knee. Osteoarthr. Cartil. 2008, 16, 1433–1441. [Google Scholar] [CrossRef]
  159. Center for Artificial Intelligence in Medicine & Imaging. COCA: Coronary Calcium and Chest CTs. 2022. Available online: https://aimi.stanford.edu/datasets/coca-coronary-calcium-chest-ct (accessed on 2 October 2025).
  160. Jack, C.R., Jr.; Bernstein, M.A.; Fox, N.C.; Thompson, P.; Alexander, G.; Harvey, D.; Borowski, B.; Britson, P.J.; Whitwell, J.L.; Ward, C.; et al. The Alzheimer’s disease neuroimaging initiative (ADNI): MRI methods. J. Magn. Reson. Imaging Off. J. Int. Soc. Magn. Reson. Med. 2008, 27, 685–691. [Google Scholar] [CrossRef]
  161. Setio, A.A.A.; Traverso, A.; De Bel, T.; Berens, M.S.; Van Den Bogaard, C.; Cerello, P.; Chen, H.; Dou, Q.; Fantacci, M.E.; Geurts, B.; et al. Validation, comparison, and combination of algorithms for automatic detection of pulmonary nodules in computed tomography images: The LUNA16 challenge. Med. Image Anal. 2017, 42, 1–13. [Google Scholar] [CrossRef]
162. Shusharina, N.; Bortfeld, T. Glioma image segmentation for radiotherapy: RT targets, barriers to cancer spread, and organs at risk [Dataset]. Cancer Imaging Arch. 2021. [Google Scholar] [CrossRef]
  163. Zhang, J.; Zhao, Y.; Saleh, M.; Liu, P.J. PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization. arXiv 2019, arXiv:1912.08777. [Google Scholar]
  164. Bernard, O.; Lalande, A.; Zotti, C.; Cervenansky, F.; Yang, X.; Heng, P.A.; Cetin, I.; Lekadir, K.; Camara, O.; Ballester, M.A.G.; et al. Deep learning techniques for automatic MRI cardiac multi-structures segmentation and diagnosis: Is the problem solved? IEEE Trans. Med. Imaging 2018, 37, 2514–2525. [Google Scholar] [CrossRef]
  165. Bakas, S.; Akbari, H.; Sotiras, A.; Bilello, M.; Rozycki, M.; Kirby, J.S.; Freymann, J.B.; Farahani, K.; Davatzikos, C. Advancing the cancer genome atlas glioma MRI collections with expert segmentation labels and radiomic features. Sci. Data 2017, 4, 1–13. [Google Scholar] [CrossRef]
  166. Bilic, P.; Christ, P.; Li, H.B.; Vorontsov, E.; Ben-Cohen, A.; Kaissis, G.; Szeskin, A.; Jacobs, C.; Mamani, G.E.H.; Chartrand, G.; et al. The liver tumor segmentation benchmark (lits). Med. Image Anal. 2023, 84, 102680. [Google Scholar] [CrossRef]
  167. Elliott, L.T.; Sharp, K.; Alfaro-Almagro, F.; Shi, S.; Miller, K.L.; Douaud, G.; Marchini, J.; Smith, S.M. Genome-wide association studies of brain imaging phenotypes in UK Biobank. Nature 2018, 562, 210–216. [Google Scholar] [CrossRef]
  168. Rusak, F.; Fonseca de Santa Cruz Oliveira, R.; Lebrat, L.; Hlinka, O.; Mejan-Fripp, J.; Smith, E.; Fookes, C.; Bradley, A.; Bourgeat, P. Synthetic Brain MRI Dataset for Testing of Cortical Thickness Estimation Methods. Data Collection. 2021. Available online: https://data.csiro.au/collection/csiro:53241v1?redirected=true (accessed on 2 October 2025).
  169. Depeursinge, A.; Vargas, A.; Platon, A.; Geissbuhler, A.; Poletti, P.A.; Müller, H. Building a reference multimedia database for interstitial lung diseases. Comput. Med. Imaging Graph. 2012, 36, 227–238. [Google Scholar] [CrossRef] [PubMed]
  170. Clark, K.; Vendt, B.; Smith, K.; Freymann, J.; Kirby, J.; Koppel, P.; Moore, S.; Phillips, S.; Maffitt, D.; Pringle, M.; et al. The Cancer Imaging Archive (TCIA): Maintaining and Operating a Public Information Repository. J. Digit. Imaging 2013, 26, 1045–1057. [Google Scholar] [CrossRef] [PubMed]
  171. Moen, T.R.; Chen, B.; Holmes III, D.R.; Duan, X.; Yu, Z.; Yu, L.; Leng, S.; Fletcher, J.G.; McCollough, C.H. Low-dose CT image and projection dataset. Med. Phys. 2021, 48, 902–911. [Google Scholar] [CrossRef] [PubMed]
  172. Armato, S.G.; McLennan, G.; Bidaut, L.; McNitt-Gray, M.F.; Meyer, C.R.; Reeves, A.P.; Zhao, B.; Aberle, D.R.; Henschke, C.I.; Hoffman, E.A.; et al. The Lung Image Database Consortium (LIDC) and Image Database Resource Initiative (IDRI). 2015. Available online: https://www.cancerimagingarchive.net/collection/lidc-idri/ (accessed on 14 September 2025).
  173. Martel, A.L.; Abolmaesumi, P.; Stoyanov, D.; Mateus, D.; Zuluaga, M.A.; Zhou, S.K.; Racoceanu, D.; Joskowicz, L. Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, 4–8 October 2020, Proceedings, Part I; Springer Nature: Cham, Switzerland, 2020; Volume 12261. [Google Scholar]
  174. Kirschke, J.S.; Löffler, M.; Sekuboyina, A.; Liebl, H. VerSe2020. 2022. Available online: https://osf.io/t98fz/ (accessed on 15 September 2025).
  175. Tahir, G.A. Ethical Challenges in Computer Vision: Ensuring Privacy and Mitigating Bias in Publicly Available Datasets. arXiv 2024, arXiv:2409.10533. [Google Scholar]
  176. Gu, C.; Gao, H. Combining GAN and LSTM Models for 3D Reconstruction of Lung Tumors from CT Scans. Int. J. Adv. Comput. Sci. Appl. 2023, 14. [Google Scholar] [CrossRef]
  177. Taha, A.A.; Hanbury, A. Metrics for evaluating 3D medical image segmentation: Analysis, selection, and tool. BMC Med. Imaging 2015, 15, 29. [Google Scholar] [CrossRef]
  178. Heusel, M.; Ramsauer, H.; Unterthiner, T.; Nessler, B.; Hochreiter, S. Gans trained by a two time-scale update rule converge to a local nash equilibrium. Adv. Neural Inf. Process. Syst. 2017, 30. [Google Scholar] [CrossRef]
  179. Deng, J.; Dong, W.; Socher, R.; Li, L.J.; Li, K.; Fei-Fei, L. Imagenet: A large-scale hierarchical image database. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; IEEE: New York, NY, USA, 2009; pp. 248–255. [Google Scholar]
  180. Skandarani, Y.; Jodoin, P.M.; Lalande, A. Gans for medical image synthesis: An empirical study. J. Imaging 2023, 9, 69. [Google Scholar] [CrossRef]
  181. Palubinskas, G. Image similarity/distance measures: What is really behind MSE and SSIM? Int. J. Image Data Fusion 2017, 8, 32–53. [Google Scholar] [CrossRef]
  182. Salimans, T.; Goodfellow, I.; Zaremba, W.; Cheung, V.; Radford, A.; Chen, X. Improved techniques for training gans. Adv. Neural Inf. Process. Syst. 2016, 29. [Google Scholar] [CrossRef]
  183. Deo, Y.; Jia, Y.; Lassila, T.; Smith, W.A.; Lawton, T.; Kang, S.; Frangi, A.F.; Habli, I. Metrics that matter: Evaluating image quality metrics for medical image generation. arXiv 2025, arXiv:2505.07175. [Google Scholar] [CrossRef]
  184. Zhang, R.; Isola, P.; Efros, A.A.; Shechtman, E.; Wang, O. The unreasonable effectiveness of deep features as a perceptual metric. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; IEEE: New York, NY, USA, 2018; pp. 586–595. [Google Scholar]
  185. Chuquicusma, M.J.; Hussein, S.; Burt, J.; Bagci, U. How to fool radiologists with generative adversarial networks? A visual turing test for lung cancer diagnosis. In Proceedings of the 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018), Washington, DC, USA, 4–7 April 2018; IEEE: New York, NY, USA, 2018; pp. 240–244. [Google Scholar]
Figure 1. Venn diagram visualizing the hierarchical and overlapping relationships between AI, ML, DL, and Generative AI. AI represents the broad field of creating intelligent systems, while ML is a subset of AI that enables machines to learn from data. Within ML, DL is a more specific approach involving neural networks with multiple layers. Generative AI falls under DL and ML, focusing on models that can generate new data resembling the training data.
Figure 2. Conceptual overview of the GAN architecture, illustrating the adversarial interplay between the generator and discriminator. Synthetic images are iteratively refined through the feedback loop. The generated and real images have been taken from [55].
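To make the generator–discriminator feedback loop in Figure 2 concrete, the following is a minimal, illustrative training-step sketch. It assumes PyTorch and uses toy fully connected networks on flattened images; none of the layer sizes, names, or hyperparameters are taken from the reviewed studies.

```python
# Minimal sketch of the adversarial feedback loop in Figure 2 (assumed PyTorch, toy sizes).
# The generator G maps a noise vector to a synthetic image; the discriminator D scores
# real versus generated samples.
import torch
import torch.nn as nn

latent_dim, img_dim = 128, 64 * 64  # illustrative only

G = nn.Sequential(nn.Linear(latent_dim, 512), nn.ReLU(), nn.Linear(512, img_dim), nn.Tanh())
D = nn.Sequential(nn.Linear(img_dim, 512), nn.LeakyReLU(0.2), nn.Linear(512, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real_batch: torch.Tensor):
    batch = real_batch.size(0)
    fake = G(torch.randn(batch, latent_dim))

    # Discriminator update: push real scores toward 1 and fake scores toward 0.
    d_loss = bce(D(real_batch), torch.ones(batch, 1)) + bce(D(fake.detach()), torch.zeros(batch, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator update: try to make the discriminator score fakes as real.
    g_loss = bce(D(fake), torch.ones(batch, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

# Example usage with a dummy "real" batch of flattened 64x64 images in [-1, 1].
print(train_step(torch.rand(8, img_dim) * 2 - 1))
```

In practice, the 3D methods reviewed below replace these toy networks with volumetric convolutional generators and discriminators, and frequently use Wasserstein, least-squares, or cycle-consistency objectives rather than the plain binary cross-entropy shown here.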
Figure 3. PRISMA diagram showing the literature review process, where 56 publications were selected from a total of 1530 collected from the five databases.
Figure 4. Types and distribution of medical image modalities used in the reviewed studies. The majority of the reviewed papers use MRI or CT due to their widespread clinical use and availability in public datasets.
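Because most of the reviewed studies train on MRI or CT volumes distributed in NIfTI format, a typical first step is to load each scan as a 3D array and rescale its intensities before feeding it to a GAN. The sketch below is illustrative only; it assumes the nibabel library, and the file name is hypothetical.

```python
# Illustrative preprocessing sketch (assumes the nibabel library; the file path is hypothetical).
import nibabel as nib
import numpy as np

def load_volume(path: str) -> np.ndarray:
    """Load a NIfTI scan as a float32 volume and rescale intensities to [-1, 1]."""
    volume = nib.load(path).get_fdata().astype(np.float32)
    vmin, vmax = volume.min(), volume.max()
    volume = (volume - vmin) / (vmax - vmin + 1e-8)  # normalize to [0, 1]
    return volume * 2.0 - 1.0                        # shift to [-1, 1], matching a tanh generator

# Hypothetical usage:
# vol = load_volume("sub-001_T1w.nii.gz")
# print(vol.shape, vol.min(), vol.max())
```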
Figure 5. Distribution of medical applications in the reviewed papers. The figure characterizes the primary research and clinical applications of GANs in 3D medical image analysis, as seen in the included research studies, showing that 3D image generation and 3D image segmentation are the most prevalent applications.
Figure 6. Taxonomy diagram of GANs in 2D medical image analysis, organized by architecture type, clinical applications, training strategies, and image modalities used.
Figure 7. The distribution of reviewed publications on the use of GANs in 3D medical imaging applications, characterized by clinical task and publication year. The distribution shows the total number of reviewed publications from each year, further divided by the medical imaging task performed.
Figure 8. The distribution of publications on GANs in 3D medical imaging organized by the datasets used.
Figure 9. Distribution of image modalities used for each of the 3D medical imaging applications. The figure displays modality–task alignment, offering insights into current research trends.
Figure 10. Distribution of evaluation metrics across the reviewed GAN-based 3D medical imaging studies.
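Many of the metrics counted in Figure 10 are simple to compute once the volumes are available as arrays. As a hedged illustration (using NumPy; the array names are placeholders, not drawn from any reviewed study), the sketch below computes PSNR, commonly reported for synthesis and enhancement, and the Dice coefficient, the dominant metric for segmentation.

```python
# Illustrative sketch of two frequently reported metrics on 3D volumes (NumPy only).
import numpy as np

def psnr(reference: np.ndarray, generated: np.ndarray, data_range: float = 1.0) -> float:
    """Peak signal-to-noise ratio between two volumes with intensities in [0, data_range]."""
    mse = np.mean((reference - generated) ** 2)
    return float("inf") if mse == 0 else float(10.0 * np.log10((data_range ** 2) / mse))

def dice(mask_a: np.ndarray, mask_b: np.ndarray, eps: float = 1e-8) -> float:
    """Dice similarity coefficient between two binary 3D masks."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    return float((2.0 * np.logical_and(a, b).sum() + eps) / (a.sum() + b.sum() + eps))

# Example with random 32x32x32 volumes standing in for a real and a synthetic scan.
rng = np.random.default_rng(0)
vol_real = rng.random((32, 32, 32))
vol_fake = np.clip(vol_real + rng.normal(0, 0.05, vol_real.shape), 0, 1)
print(round(psnr(vol_real, vol_fake), 2), round(dice(vol_real > 0.5, vol_fake > 0.5), 3))
```

Distribution-level metrics such as FID, MMD, and LPIPS additionally require a feature extractor, which is one reason their use in 3D imaging is less standardized than PSNR, SSIM, and Dice.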
Table 1. Databases and search strategy for study selection.
Database | Search Tags | Articles
IEEE | (“Generative AI”) OR (“GANs”) AND (“3D Medical Imaging”) OR (“Three-Dimensional Imaging”) AND (2022–2025) | 167
Science Direct | (“Generative AI”) OR (“GANs”) AND (“3D Medical Imaging”) OR (“Three-Dimensional Imaging”) AND (2022–2025) | 8
Google Scholar | (“Generative AI”) OR (“GANs”) AND (“3D Medical Imaging”) OR (“Three-Dimensional Imaging”) AND (2022–2025) | 1186
Scopus | (“Generative AI”) OR (“GANs”) AND (“3D Medical Imaging”) OR (“Three-Dimensional Imaging”) AND (2022–2025) | 147
PubMed | (“Generative AI”) OR (“GANs”) AND (“3D Medical Imaging”) OR (“Three-Dimensional Imaging”) AND (2022–2025) | 22
Table 2. Risk of bias assessment across included studies. Each study was independently reviewed and assessed to ensure transparency and reproducibility. Assessed domains include (1) eligibility criteria of the studies, (2) identification and selection of studies, (3) data extraction and outcome evaluation, (4) results and interpretation of findings.
Reference, Year | Domain-1: Study Eligibility Criteria | Domain-2: Identification, Selection of Studies | Domain-3: Data Extraction, Outcome Evaluation | Domain-4: Results, Interpretation of Findings | Risk of Bias in the Review
Kim et al., 2024 [73] | Low | Low | Low | Low | Low
Sun et al., 2022 [74] | Low | Low | Low | Low | Low
Prakash et al., 2024 [75] | Low | Low | Low | Low | Low
Hwang et al., 2024 [76] | Low | High | Unclear | Low | High
Liu et al., 2023 [67] | Low | Low | Low | Low | Low
Hu et al., 2023 [77] | Low | Low | Low | Low | Low
Zhou et al., 2022 [78] | Low | Low | Low | Low | Low
Rezaei et al., 2023 [79] | Low | Low | Low | Low | Low
Elloumi et al., 2024 [80] | Low | Low | Low | Low | Low
Safari et al., 2023 [81] | Low | Low | Low | Low | Low
Subramaniam et al., 2022 [82] | Low | Unclear | Low | Low | Low
Zi et al., 2024 [83] | Low | High | Unclear | High | High
Tudosiu et al., 2022 [84] | Low | High | Unclear | High | High
Bui et al., 2024 [85] | Low | High | Low | High | High
Tyagi et al., 2022 [86] | Low | Low | Low | Low | Low
Poonkodi et al., 2023 [87] | Low | Low | Low | Low | Low
Jung et al., 2023 [88] | Low | Low | Low | Low | Low
Ge et al., 2023 [89] | Low | Low | Low | Low | Low
Aydin et al., 2024 [90] | Low | Low | Low | Low | Low
King et al., 2024 [91] | Low | High | High | High | High
Zhou et al., 2025 [92] | Low | Low | Low | Low | Low
Liu et al., 2024 [93] | Low | Low | Low | Low | Low
Zhou et al., 2024 [94] | Low | High | Low | Low | High
Corona et al., 2024 [95] | Low | Low | Low | Low | Low
Çelik et al., 2022 [96] | Low | Low | Low | Low | Low
Kim et al., 2024 [97] | Low | Low | Low | Low | Low
Sun et al., 2023 [98] | Low | High | Low | Low | High
Mensing et al., 2022 [99] | Low | High | Unclear | Low | High
Vagni et al., 2024 [100] | Low | High | Low | Low | High
Kanakatte et al., 2022 [101] | Low | Low | Low | Low | Low
Tiago et al., 2022 [102] | Low | Low | Low | High | High
Elloumi et al., 2023 [103] | Low | Low | Low | Low | Low
Sharaby et al., 2024 [104] | Low | Low | Low | High | High
Sun et al., 2024 [105] | Low | Low | Low | Low | Low
Chithra et al., 2024 [106] | Low | Low | Low | Low | Low
Gao et al., 2023 [107] | Low | Low | Low | Low | Low
Kermi et al., 2022 [108] | Low | High | Low | High | High
Xue et al., 2023 [109] | Low | Low | Low | Low | Low
Zhang et al., 2022 [110] | Low | Low | Low | Low | Low
Pradhan et al., 2023 [111] | Low | Low | Low | Low | Low
Xia et al., 2022 [112] | Low | Low | Low | Low | Low
He et al., 2022 [113] | Low | Low | Low | Low | Low
Joseph et al., 2022 [114] | Low | Low | Low | Unclear | Unclear
Dong et al., 2024 [115] | Low | Low | Low | Low | Low
Zhang et al., 2022 [116] | Low | Unclear | Low | Unclear | Unclear
Amran et al., 2022 [117] | Low | Low | Low | Low | Low
Wang et al., 2022 [118] | Low | High | Low | Low | High
Zhang et al., 2025 [119] | Low | Low | Low | Low | Low
Xing et al., 2024 [120] | Low | Low | Low | Low | Low
Fujita et al., 2025 [121] | Low | Unclear | Low | Unclear | Unclear
Touati et al., 2024 [122] | Low | Low | Low | Low | Low
Chen et al., 2024 [123] | Low | Unclear | Low | High | High
Bazangani et al., 2022 [124] | Low | Low | Low | Low | Low
Lin et al., 2024 [125] | Low | Low | Low | Low | Low
Table 3. How each section contributes to the aims of this survey.
Section | Contribution to the Aims of this Survey
Medical Image Modality (Section 3.1) | Highlights the different image modalities used in the research papers (i.e., MRI, CT, PET, X-ray, TOF-MRA, ultrasound, echocardiography) for the training of GAN models. It explains the reasons behind the most commonly used modalities and the challenges these may pose for GAN models.
Medical Applications (Section 3.2) | Organizes the research publications by the medical application tasks performed (generation, segmentation, transformation, etc.).
2D Medical Images (Section 3.3) | Provides a brief insight into the GAN models used for 2D medical images. Although this is not the focus of this paper, the introduction familiarizes readers with the models that laid the foundation for subsequent 3D medical imaging research.
3D Medical Images (Section 3.4) | Highlights the ongoing research on GANs in 3D medical imaging, specifically from 2022–2025, a time frame selected to focus only on the most recent advancements. This section covers the methods used, their performance, and any limitations.
Public Datasets (Section 3.5) | Guides researchers towards publicly accessible datasets and indicates which research publications have employed them, for easier access. It also lists each dataset's modality, target organ, and number of subjects.
Code Availability (Section 3.6) | Directs researchers to studies with publicly available code by providing accessible links, supporting reproducibility, transparency, and faster follow-up research.
Evaluation Metrics (Section 3.7) | Details which metrics have been used in the reviewed studies.
Table 4. Research publications organized according to the medical applications performed. This representation enables a clear understanding of how the GAN-based approaches are being applied in practice across the various clinical tasks.
Medical Application | Citation | Task Performed
3D image generation | Kim et al., 2024 [73] | 3D image generation using 3D-DGGAN.
3D image generation | Sun et al., 2022 [74] | Generating high-resolution 3D images using HA-GAN.
3D image generation | Hwang et al., 2024 [76] | Translate 2D X-ray images into 2D MRI and then reconstruct 3D knee MRI images.
3D image generation | Liu et al., 2023 [67] | Generating 3D images using 3D Split-and-Shuffle-GAN.
3D image generation | Hu et al., 2023 [77] | Generating 3D medical images using HSPN.
3D image generation | Rezaei et al., 2023 [79] | Generating 3D models of lung tumors from 2D CT scans.
3D image generation | Safari et al., 2023 [81] | Merge CT scans, which capture bone structures, with high-resolution 3D T1-Gd MRI, known for soft tissue contrast, to obtain a fused 3D image.
3D image generation | Tudosiu et al., 2022 [84] | 3D brain image generation using VQ-VAE and Transformer.
3D image generation | Poonkodi et al., 2023 [87] | Generating 3D images of the lung using 3D-MedTranCSGAN.
3D image generation | Jung et al., 2023 [88] | Generating 3D brain images using cGAN with a 3D discriminator.
3D image generation | Aydin et al., 2024 [90] | Generate synthetic TOF MRA volumes of the brain (Circle of Willis).
3D image generation | King et al., 2024 [91] | Generating 3D brain images using 3D DCGAN.
3D image generation | Zhou et al., 2025 [92] | Generating 3D brain images using 3D VQGAN.
3D image generation | Zhou et al., 2024 [94] | Generating 3D brain images using 3D-VQGAN-cond.
3D image generation | Corona et al., 2024 [95] | Generating 3D images using Swin UNETR.
3D image generation | Kim et al., 2024 [97] | Generating 3D images using VI-GAN.
3D image generation | Sun et al., 2023 [98] | Generating 3D brain images using DU-CycleGAN.
3D image generation | Mensing et al., 2022 [99] | Generating 3D images using a GAN based on FastGAN.
3D image generation | Chithra et al., 2024 [106] | Generating 3D brain MRI.
3D image generation | Gao et al., 2023 [107] | 3D spine reconstruction from 2D orthogonal X-ray images.
3D image generation | Xue et al., 2023 [109] | 3D image synthesis of high-quality PET images from low-dose PET images.
3D image generation | Zhang et al., 2022 [110] | 3D high-resolution infant brain MRI synthesis using PTNet3D.
3D image generation | Pradhan et al., 2023 [111] | 2D X-ray image to 3D view of bones.
3D image generation | Xia et al., 2022 [112] | 3D coronary artery reconstruction (3D-CAR).
3D image generation | Wang et al., 2022 [118] | 3D image synthesis from a 2D anisotropic non-contrast image.
3D image generation | Xing et al., 2024 [120] | Synthesizing 3D CT from 2D lung X-rays.
3D image generation | Fujita et al., 2025 [121] | Generation of 3D CT images from X-ray images.
3D image generation | Touati et al., 2024 [122] | Generation of 3D CT from MRI.
3D image generation | Bazangani et al., 2022 [124] | Generates 3D T1-weighted MRI from FDG-PET.
3D image generation and segmentation | Prakash et al., 2024 [75] | Improve 3D medical image reconstruction and segmentation using SculptorGAN with WP-UNet.
3D image generation and segmentation | Subramaniam et al., 2022 [82] | Generate realistic 3D TOF-MRA volumes along with segmentation labels.
3D image generation and segmentation | Zi et al., 2024 [83] | Medical image segmentation and 3D reconstruction.
3D image generation and segmentation | Tiago et al., 2022 [102] | Generate synthetic 3D echocardiography images along with their corresponding anatomical labels.
3D image generation and segmentation | Sun et al., 2024 [105] | Enhanced 3D brain image synthesis and segmentation.
3D image segmentation | Elloumi et al., 2024 [80] | 3D lung image segmentation using PGGAN.
3D image segmentation | Bui et al., 2024 [85] | 3D image segmentation using SAM3D.
3D image segmentation | Tyagi et al., 2022 [86] | 3D lung nodule segmentation using CSE-GAN.
3D image segmentation | Ge et al., 2023 [89] | 3D image segmentation using ASRGAN.
3D image segmentation | Liu et al., 2024 [93] | 3D prostate segmentation using 3D EAGAN.
3D image segmentation | Çelik et al., 2022 [96] | 3D brain image segmentation using Vol2SegGAN.
3D image segmentation | Vagni et al., 2024 [100] | 3D image segmentation using Vox2Vox GAN.
3D image segmentation | Kanakatte et al., 2022 [101] | 3D cardiac MRI (cMRI) segmentation.
3D image segmentation | Elloumi et al., 2023 [103] | 3D lung segmentation and patient data protection using watermarking.
3D image segmentation | Sharaby et al., 2024 [104] | 3D renal segmentation using a modified Pix2Pix GAN.
3D image segmentation | Kermi et al., 2022 [108] | Segmentation of HGG and LGG glioma sub-regions in 3D brain MRI.
3D image segmentation | He et al., 2022 [113] | 3D liver segmentation by embedding 3D U-Net into DCGAN.
3D image segmentation | Amran et al., 2022 [117] | Blood vessel segmentation using BV-GAN.
3D image segmentation | Chen et al., 2024 [123] | Fine-tuning of pre-trained segmentation models to improve segmentation.
3D image transformation | Zhou et al., 2022 [78] | High-resolution 3D PET image synthesis from low-dose PET images.
3D image transformation | Joseph et al., 2022 [114] | 3D Cone Beam Computed Tomography (CBCT) to 3D Fan Beam Computed Tomography (FBCT) conversion.
3D image enhancement | Dong et al., 2024 [115] | Image quality improvement for pancreatic cine images (MRI).
3D image enhancement | Zhang et al., 2022 [116] | Producing super-resolution images using SOUP-GAN.
3D image enhancement | Zhang et al., 2025 [119] | Super-resolution reconstruction of 3D medical images.
3D image enhancement | Lin et al., 2024 [125] | Harmonization of 3D MRI.
Table 5. Literature survey of the use of GANs in 3D medical imaging.
StudyApplicationDatasetImage ModalityMethod and Performance
Kim et al., 2024 [73]3D medical image generationCT-ORG [153], HCP [154]Liver and spine CT, brain MRI3D-DGGAN exhibits the least performance degradation in MMD, FID, LPIPS, and PSNR compared to the existing methods.
Sun et al., 2022 [74]3D medical image generationCOPDGene [155], GSP [156]Thorax CT, brain MRIHA-GAN produces sharper images at a higher resolution of 2563, compared to other methods. Lower FID, MMD, and higher IS, indicating generation of more realistic images.
Prakash et al., 2024 [75]3D medical image reconstruction and segmentationKiTs19 [157]Kidney CTSculptorGAN with WP-UNet leads to 35% reduction in reconstruction time and 20% improvement in segmentation accuracy. Better Dice, Jaccard, Accuracy, Precision, Recall, and Hausdorff results as compared to classical 3D U-Net, ensuring computational efficiency, detailed feature extraction, and high-accuracy identification of renal tumors.
Hwang et al., 2024 [76]3D medical image generationOAI [158]X-ray, MRIGAN incorporating CutMix and GRAF translates 2D X-ray to 3D MRI with better PSNR, SSIM, and FID compared to AttentionGAN and MUNIT.
Liu et al., 2023 [67]3D medical image generationCOCA [159], ADNI [160]Heart CT, brain MRI3D Split-and-Shuffle-GAN outperforms other baseline methods significantly on FID, PSNR, and MS-SSIM. t-SNE shows a similar distribution to real images, confirming the generation of diverse, high-quality 3D medical images.
Hu et al., 2023 [77]3D medical image reconstructionIn-house datasetBrain MRIHSPN gives real-time feedback, and outperforms other models in terms of visual quality, quantitative analysis, and classification performance, as evaluated by CD and PC-to-PC error.
Zhou et al., 2022 [78]3D medical image generationIn-house datasetLiver, brain, kidney, bladder PETSGSGAN achieved SOTA performance comparable to other methods in terms of PSNR, SSIM, MAE, and U-Net score.
Rezaei et al., 2023 [79]3D medical image reconstructionLUNA16 [161]Lung CTGAN employed in three stages: lung segmentation, tumor segmentation, and 3D lung tumor reconstruction, outperforming other SOTA techniques, in terms of HD and ED.
Elloumi et al., 2024 [80]3D medical image segmentationIn-house datasetLung CTPGGAN combined with VGG 16+U-Net and ResNet 50+U-Net achieves 99.48% validation accuracy.
Safari et al., 2023 [81]3D medical image generationGLIS-RT [162]CT and 3D T1-Gd MRIMedFusionGAN outperforms seven traditional and eight DL methods.
Subramaniam et al., 2022 [82]3D medical image generation with segmentation labelsPEGASUS [163], 1000Plus3D TOF-MRAFour variants of 3D Wasserstein GANs (WGAN), including gradient penalty (GP), spectral normalization (SN), and mixed precision models (SN-MP and c-SN-MP), show lowest FID scores and optimal PRD curves.
Zi et al., 2024 [83]Medical image segmentation and 3D reconstructionACDC [164], BraTS [165], LiTS [166]Cardiac MRI, brain MRI, liver CTThis model achieved strong results, with Dice, IoU for segmentation, and MSE, SSIM for 3D reconstruction, indicating accurate reconstruction of anatomical structures with preserved details.
Tudosiu et al., 2022 [84]3D medical image generationUKB [167], ADNI [160]Brain MRISignificantly outperforms by generating realistic brain images in terms of MS-SSIM.
Bui et al., 2024 [85]3D medical image segmentationSynapse, ACDC [164], BraTS [165], and LTS [168]Multi-organ CT, brain MRISAM3D shows competitive performance DSC score improvement compared to other SAM-based methods.
Tyagi et al., 2022 [86]3D medical image segmentationLUNA16 [161], ILND [169]Lung CTCSE-GAN outperforms other U-Net, R2UNet models. Two datasets were used, where the model achieved significant performance on a completely different dataset, proving its generalizability.
Poonkodi et al., 2023 [87]3D medical image transformationTCIA [170]Lung PET, CT, and MRI images3D-MedTranCSGAN performs multiple tasks without modifying its core design, such as transforming PET to CT, reconstructing CT to PET, correcting motion artifacts in MRI, and denoising PET images.
Jung et al., 2023 [88]3D medical image generationADNI [160]Brain MRIcGAN with 3D discriminator outperformed CAAE, AttGAN, StarGAN, and GANimation in terms of image quality, condition generation accuracy, and efficiency, with better FID and KID scores.
Ge et al., 2023 [89]3D medical image segmentationLDCT [171], in-houseLiver CT, pancreas CTASRGAN significantly improves reconstruction performance with 2.42 dB improvement in PSNR and boosts segmentation accuracy for liver tumors and pancreas, demonstrating strong generalization across different CT scanner models without requiring extra retraining, outperforming other methods.
Aydin et al., 2024 [90]3D medical image generationIXI, Lausanne, NITRC, CASILab, ICBM, OASIS 3, TopCowBrain TOF MRAStyleGANv2 generated realistic and diverse TOF MRA volumes of CoW when analyzed visually, with significant performance in FID, MD, and AUC-PRD.
King et al., 2024 [91]3D medical image generationMCICBrain sMRI α -SN-GAN produces synthetic images with the highest level of quality and variety, as demonstrated through both visual assessments and numerical evaluations, also raising the classifier accuracy from 61% to 79%.
Zhou et al., 2025 [92]3D medical image generationBRaTS 2019 [165], in-houseBrain MRI3D-VQGAN generates synthetic data that can be directly used in tumor classification tasks, validating the superiority of this method, surpassing baseline models in AUC, F1-score, and accuracy.
Liu et al., 2024 [93]3D medical image segmentationTRUS, µRegProTransrectal ultrasound3D EAGAN significantly improved performance metrics compared to SOTA segmentation methods in terms of Dice, Jaccard, HD, Precision, and Recall.
Zhou et al., 2024 [94]3D medical image generationBRaTS 2019 [165]Brain MRI3D-VQGAN-cond generated LGG and HGG ROIs for training a classification model confirms the improved ability to distinguish between LGG and HGG tumors, achieving better results in image quality metric: MS-SSIM, slice-wise FID, and MMD.
Corona et al., 2024 [95]3D medical image generationLIDC [172]Chest CTSwin UNETR compared to other models Swin UNETR produced the highest-quality outputs with better performance in SSIM, PSNR, MSE, and MAE.
Çelik et al., 2022 [96]3D medical image segmentationIBSR18, MRBRAINS13, MRBRAINS18Brain MRIVol2SegGAN performed best in segmenting cerebrospinal fluid, gray matter, and white matter based on Dice and VS.
Kim et al., 2024 [97]3D medical image generationCT-ORG [153], in-houseLiver CT, spine CTVI-GAN produced volumes that closely resembled the ground truth, outperforming other methods in terms of IoU, F1-score, and Dice.
Sun et al., 2023 [98] | 3D medical image generation | ABCs MICCAI 2020 [173] | Brain MRI and CT images | DU-CycleGAN excels in both 2D and 3D image generation, with MAE, PSNR, and SSIM outperforming current SOTA methods.
Mensing et al., 2022 [99] | 3D medical image generation | GNC 2014–2019 | Whole-body MRI | Model outperforms 3D-StyleGAN in terms of MMD, FID, and MS-SSIM.
Vagni et al., 2024 [100] | 3D medical image segmentation | In-house | MRI | Vox2Vox GAN achieved better performance than the 3D U-Net in segmenting several organs in terms of DSC and HD.
Kanakatte et al., 2022 [101] | 3D medical image segmentation | ACDC [164] | Cardiac short-axis MRI | Model shows high accuracy in terms of Dice, especially when blind-tested on the M&Ms dataset, and matches the performance of 2D models for some classes by effectively incorporating 3D contextual information.
Tiago et al., 2022 [102] | 3D medical image generation and segmentation | In-house | Heart echocardiography images | GAN-generated datasets are valuable for training deep learning models, such as heart segmentation, providing a useful resource for cardiac imaging when real patient data are limited.
Elloumi et al., 2023 [103] | 3D medical image segmentation | In-house | Lung CT | Pix2pix + DCGAN simulation results demonstrate an effective combination of GAN-based deep learning for medical image segmentation while simultaneously securing the images with an appropriate watermarking algorithm.
Sharaby et al., 2024 [104] | 3D medical image segmentation | In-house | Kidney MRI | Model demonstrates its effectiveness in renal diagnosis in terms of Dice and accuracy.
Sun et al., 2024 [105] | 3D medical image generation and segmentation | BraTS2015, BraTS2018 [165], ISLES2015-SISS | Brain tumor MRI, stroke MRI | Per-CycleGAN-CACNN performed well in T1-to-FLAIR image conversion in terms of PSNR, SSIM, and RMSE. DualCMP-GAN-CACNN enhances the generated image quality and segmentation accuracy. DualCMP-GAN-3D ResU shows superior performance compared to using only real data, especially in segmentation of stroke lesions in terms of HD and precision.
Chithra et al., 2024 [106] | 3D medical image generation | BraTS 2020 [165] | Brain MRI | DCGAN, Pix2Pix GAN, and WGAN combined with a style transfer technique yielded the best accuracy.
Gao et al., 2023 [107] | 3D medical image generation | VerSe’20, VerSe’19 [174] | Spine CT scan | 3DSRNet using GAN shows significant performance in terms of PSNR, SSIM, CS, MAE, MSE, and LPIPS.
Kermi et al., 2022 [108] | 3D medical image segmentation | BraTS 2022 [165] | 3D brain MRI | Model shows significant performance in terms of Dice.
Xue et al., 2023 [109] | 3D medical image generation | UDPET | Whole-body PET images | CG-3DSRGAN outperforms other methods by producing superior reconstruction results in terms of PSNR and NRMSE across various dose levels, particularly in accurately reconstructing brain structure and liver texture.
Zhang et al., 2022 [110] | 3D medical image generation | dHCP, BCP | Infant brain MRI | PTNet3D showed superior synthesis accuracy and generalization compared to CNN-based GAN models, improving infant whole-brain segmentation for better accuracy and efficiency in MRI synthesis tasks.
Pradhan et al., 2023 [111] | 3D medical image generation | In-house | X-ray, CT of knee, elbow, lower limb | Model can predict views from all angles (0° to 360°), providing a comprehensive 3D representation of bones and joints.
Xia et al., 2022 [112] | 3D medical image generation | Private dataset | Intravascular ultrasound | AwCPM-Net outperforms existing CPR methods in capturing motion signals and cardiac phases, and detects arterial wall structures better than current MBE techniques. The reconstructed 3D artery anatomy allows accurate localization and assessment of vessel stenosis.
He et al., 2022 [113] | 3D medical image segmentation | LiTS-2017 [166], KiTS19 [157] | Liver CT, kidney CT | 3D U-Net + DCGAN shows improved segmentation performance and outperforms other methods in terms of Dice.
Joseph et al., 2022 [114] | 3D medical image translation | In-house | Head and neck paired CBCT–FBCT volumes | CycleGAN performs well, with pseudo-FBCT images closely resembling the real FBCT images in terms of PSNR, MSE, and SSIM.
Dong et al., 2024 [115] | 3D medical image enhancement | In-house | MRI pancreas | Denoising CycleGAN and Enhancement CycleGAN: the former denoises cine MRI using the time-domain cine image series, and the latter enhances the spatial resolution and contrast of the images.
Zhang et al., 2022 [116] | 3D medical image enhancement | In-house | CT, MRI | SOUP-GAN generates high-resolution thin-slice images with anti-aliasing and deblurring.
Amran et al., 2022 [117] | 3D medical image segmentation | MIDAS, in-house | Brain TOF-MRA | BV-GAN for segmenting brain blood vessels.
Wang et al., 2022 [118] | 3D medical image generation | In-house | Brain MRI | GAN used to synthesize a 3D image from a 2D anisotropic non-contrast image.
Zhang et al., 2025 [119] | 3D medical image enhancement | BraTS2021, LUNA16 | Brain MRI, lung CT | LSTMAGAN for super-resolution reconstruction of 3D medical images.
Xing et al., 2024 [120] | 3D medical image generation | LIDC-IDRI | Lung CT | DP-GAN+B produces CT volumes from 2D frontal and lateral lung X-rays.
Fujita et al., 2025 [121] | 3D medical image generation | In-house | Pelvic radiographs and CT images | CycleGAN and X2CT-GAN used to generate 3D CT images from X-ray images.
Touati et al., 2024 [122] | 3D medical image generation | In-house | Paired T1-weighted MRI and CT scans | Dual CT-synthesis GAN synthesizes CT from T1-weighted MRI.
Chen et al., 2024 [123] | 3D medical image segmentation | In-house | Ultrasound | USTGAN fine-tunes pre-trained segmentation models, thereby improving segmentation.
Bazangani et al., 2022 [124] | 3D medical image generation | ADNI | Brain PET | E-GAN generates 3D T1-weighted MRI corresponding to FDG-PET.
Lin et al., 2024 [125] | 3D medical image enhancement | ADNI, UKBB, NKI-RS | Brain MRI | Pseudo-warping field translation with GAN is used to harmonize 3D MRI.
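Several of the models summarized above (CycleGAN, Vox2Vox, 3D U-Net + DCGAN, DP-GAN and related variants) follow the same volumetric image-to-image pattern: a convolutional encoder–decoder generator paired with a patch-based 3D discriminator. The following is a minimal sketch of that pattern, assuming PyTorch; the layer counts, channel widths, and 64³ input size are illustrative assumptions and do not reproduce any specific cited architecture.

```python
# Minimal PyTorch sketch of the volumetric image-to-image GAN pattern used by many
# reviewed models. Layer and channel choices are illustrative only.
import torch
import torch.nn as nn

class ResBlock3D(nn.Module):
    """Residual block with 3D convolutions and instance normalization."""
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(ch, ch, kernel_size=3, padding=1),
            nn.InstanceNorm3d(ch), nn.ReLU(inplace=True),
            nn.Conv3d(ch, ch, kernel_size=3, padding=1),
            nn.InstanceNorm3d(ch),
        )
    def forward(self, x):
        return x + self.body(x)

class Generator3D(nn.Module):
    """Encoder-residual-decoder generator mapping one volume to another (e.g., MRI -> synthetic CT)."""
    def __init__(self, in_ch=1, out_ch=1, base=32, n_res=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(in_ch, base, 7, padding=3), nn.InstanceNorm3d(base), nn.ReLU(inplace=True),
            nn.Conv3d(base, base * 2, 3, stride=2, padding=1),
            nn.InstanceNorm3d(base * 2), nn.ReLU(inplace=True),
            *[ResBlock3D(base * 2) for _ in range(n_res)],
            nn.ConvTranspose3d(base * 2, base, 3, stride=2, padding=1, output_padding=1),
            nn.InstanceNorm3d(base), nn.ReLU(inplace=True),
            nn.Conv3d(base, out_ch, 7, padding=3), nn.Tanh(),
        )
    def forward(self, x):
        return self.net(x)

class PatchDiscriminator3D(nn.Module):
    """3D PatchGAN: scores overlapping volumetric patches as real or synthetic."""
    def __init__(self, in_ch=1, base=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(in_ch, base, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv3d(base, base * 2, 4, stride=2, padding=1),
            nn.InstanceNorm3d(base * 2), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv3d(base * 2, 1, 4, padding=1),  # patch-wise real/fake logits
        )
    def forward(self, x):
        return self.net(x)

if __name__ == "__main__":
    g, d = Generator3D(), PatchDiscriminator3D()
    mri = torch.randn(1, 1, 64, 64, 64)   # one single-channel 64^3 volume
    fake_ct = g(mri)                       # same spatial size as the input
    print(fake_ct.shape, d(fake_ct).shape)
```

In practice, the adversarial loss on the discriminator's patch logits is combined with voxel-wise (L1) and, for CycleGAN-style models, cycle-consistency terms; those training details vary across the cited studies.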
Table 6. Public datasets used in the reviewed GAN-based 3D medical imaging research papers, organized by modality, clinical applications, and dataset attributes.
Dataset | Full Name of Dataset | Modality | Organ | Subjects | Citation and Application
LUNA | LUng Nodule Analysis | CT | Lung | 888 | [79] 3D image reconstruction; [86] 3D image segmentation
ADNI | Alzheimer’s Disease Neuroimaging Initiative | MRI | Brain | 819 | [67,84,88] 3D image generation; [67] 3D image segmentation
COCA | Coronary Calcium | CT | Heart | - | [67] 3D image generation
OAI | Osteoarthritis Initiative | X-ray, MRI | Knee | 4796 | [76] 3D image generation
KiTS19 | The 2019 Kidney and Kidney Tumor Segmentation Challenge | CT | Kidney | 210 | [75] 3D image reconstruction and segmentation; [113] 3D image segmentation
CT-ORG | CT Volumes with Multiple Organ Segmentations | CT | Liver, spine | 140 | [73,97] 3D image generation
dHCP | Developing Human Connectome Project | MRI | Infant brain | 273 | [110] 3D image generation
BCP | Baby Connectome Project | MRI | Infant brain | 500 | [110] 3D image generation
COPDGene | Genetic Epidemiology of Chronic Obstructive Pulmonary Disease | CT | Thorax | 10,000 | [74] 3D image generation
GSP | Genomics Superstruct Project | MRI | Brain | 1570 | [74] 3D image generation
LiTS | Liver Tumor Segmentation Challenge | CT | Liver | 130 | [83] 3D image reconstruction and segmentation; [113] 3D image segmentation
IBSR18 | Internet Brain Segmentation Repository | MRI | Brain | 18 | [96] 3D image segmentation
ACDC | Automated Cardiac Diagnosis Challenge | MRI | Cardiac | 150 | [83] 3D image reconstruction and segmentation; [85,101] 3D image segmentation
MICCAI | Medical Image Computing and Computer-Assisted Intervention | MRI, CT | Brain | - | [98,101] 3D image generation
BraTS | Brain Tumor Segmentation Challenge | MRI | Brain | 2000 | [83,105] 3D image reconstruction and segmentation; [85,108] 3D image segmentation; [92,94,106] 3D image generation
Synapse | Synapse Multi-organ CT | CT | Abdomen multi-organ | 50 | [85] 3D image segmentation
Lausanne | Obtained from Lausanne University Hospital (CHUV) | TOF-MRA | Brain | 284 | [90] 3D image generation
NITRC | Neuroimaging Informatics Tools and Resources Clearinghouse | TOF-MRA | Brain | 6845 | [90] 3D image generation
CASILab | Centre of Advanced Studies and Innovation Lab | TOF-MRA | Brain | - | [90] 3D image generation
IXI | Information eXtraction from Images | TOF-MRA | Brain | 600 | [90] 3D image generation
ICBM | International Consortium for Brain Mapping | TOF-MRA | Brain | 7000 | [90] 3D image generation
OASIS 3 | Open Access Series of Imaging Studies | TOF-MRA | Brain | 1378 | [90] 3D image generation
TopCow | Topology-Aware Anatomical Segmentation of the Circle of Willis for CTA and MRA | TOF-MRA | Brain | 125 | [90] 3D image generation
MCIC | MIND Clinical Imaging Consortium | MRI | Brain | 146 | [91] 3D image generation
TRUS | Transrectal ultrasound | Ultrasound | Prostate | 6761 | [93] 3D image segmentation
µRegPro | - | Ultrasound | Prostate | 141 | [93] 3D image segmentation
LIDC | Lung Image Database Consortium | CT | Chest | 1010 | [95] 3D image generation
MRBrainS | MR Brain Segmentation | MRI | Brain | Training: 5; Testing: 15 | [96] 3D image segmentation
ISLES2015-SISS | Ischemic Stroke Lesion Segmentation—Sub-acute Ischemic Stroke Lesion Segmentation | MRI | Brain | 64 | [105] 3D image generation and segmentation
VerSe | Vertebrae Segmentation Challenge | CT | Spine | 355 | [107] 3D image generation
UDPET | Ultra-Low Dose Challenge | PET | Whole body | 800 | [109] 3D image generation
UKB | UK Biobank | MRI | Brain | 500,000 | [84] 3D image generation
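Most of the datasets in Table 6 (BraTS, ADNI, LiTS, KiTS19, and others) distribute each scan as a NIfTI volume. The sketch below shows one common way such a volume is prepared for 3D GAN training: intensity windowing, rescaling to [-1, 1] (the usual output range of a Tanh generator), and random sub-volume cropping. It assumes the nibabel library; the file name is a hypothetical placeholder, and preprocessing choices differ across the cited studies.

```python
# A minimal data-preparation sketch (not taken from any cited paper) for NIfTI volumes
# from the public datasets listed above.
import numpy as np
import nibabel as nib  # assumed available: pip install nibabel

def load_normalized_volume(path, lo_pct=0.5, hi_pct=99.5):
    """Load a NIfTI volume and rescale intensities to [-1, 1] using a robust percentile window."""
    vol = nib.load(path).get_fdata().astype(np.float32)
    lo, hi = np.percentile(vol, [lo_pct, hi_pct])
    vol = np.clip(vol, lo, hi)
    return 2.0 * (vol - lo) / (hi - lo + 1e-8) - 1.0

def random_patch(vol, size=64, rng=np.random.default_rng()):
    """Crop a random size^3 sub-volume; 3D GANs are usually trained on patches to fit GPU memory."""
    starts = [rng.integers(0, max(s - size, 1)) for s in vol.shape[:3]]
    return vol[starts[0]:starts[0] + size,
               starts[1]:starts[1] + size,
               starts[2]:starts[2] + size]

if __name__ == "__main__":
    volume = load_normalized_volume("BraTS2021_00001_t1.nii.gz")  # hypothetical file name
    patch = random_patch(volume)  # e.g., reshaped to (1, 1, 64, 64, 64) before feeding a generator
    print(volume.shape, patch.shape, float(patch.min()), float(patch.max()))
```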
Table 7. Code availability across the reviewed GAN-based 3D medical imaging studies.
Model and Citation | Dataset | Modality | Organ | Link
PTNet3D [110] | dHCP, BCP | MRI | Infant brain | https://github.com/XuzheZ/PTNet3D (accessed on 2 October 2025)
3D-VQGAN-cond [94] | BraTS | MRI | Brain | https://github.com/IMICSLab/Brain_VQGAN_TATrans (accessed on 2 October 2025)
StyleGANv2 [90] | IXI, Lausanne, NITRC, CASILab, ICBM, OASIS 3, TopCow | TOF-MRA | Brain | https://www.medrxiv.org/node/788185.external-links.html (accessed on 2 October 2025)
cGAN with 3D discriminator [88] | ADNI | MRI | Brain | https://github.com/EuijinMisp/ADE-synthesizer.T (accessed on 2 October 2025)
SAM3D [85] | Synapse, ACDC, BraTS | CT, MRI | Lung, brain | https://github.com/UARK-AICV/SAM3D (accessed on 2 October 2025)
3D-DGGAN [73] | CT-ORG, HCP | CT, MRI | Liver, spine, brain | https://github.com/mskim99/3D-DGGAN/ (accessed on 2 October 2025)
HA-GAN [74] | COPDGene, GSP | CT, MRI | Thorax, brain | https://github.com/batmanlab/HA-GAN (accessed on 2 October 2025)
WGAN, WGAN w/ GP/SN/SN-MP/c-SN-MP [82] | PEGASUS, 1000Plus | TOF-MRA | Brain | https://github.com/prediction2020/3DGAN_synthesis_of_3D_TOF_MRA_with_segmentation_labels (accessed on 2 October 2025)
SOUP-GAN [116] | In-house | CT, MRI | Abdomen, pelvis, brain | https://github.com/Mayo-Radiology-Informatics-Lab/SOUP-GAN (accessed on 2 October 2025)
Pseudo-warping field translation with GAN [125] | ADNI, UKBB, NKI-RS | MRI | Brain | https://github.com/lx123-j/PWFHarmonization (accessed on 2 October 2025)
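The repositories above generally release trained generator weights alongside the code. The sketch below illustrates the usual pattern for restoring such weights for inference; it is a hedged, generic example only. RepoGenerator3D and the checkpoint path are placeholders, and in practice the model class and loading convention come from the cloned repository itself.

```python
# A generic checkpoint-restoration sketch (not taken from any listed repository).
import os
import torch
import torch.nn as nn

class RepoGenerator3D(nn.Module):
    """Stand-in for the generator class that a cloned repository would provide."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(16, 1, 3, padding=1), nn.Tanh(),
        )
    def forward(self, x):
        return self.net(x)

CKPT = "checkpoints/generator.pth"  # hypothetical path to downloaded weights

model = RepoGenerator3D()
if os.path.exists(CKPT):
    # Many repositories save a plain state_dict; strict=False tolerates renamed layers.
    model.load_state_dict(torch.load(CKPT, map_location="cpu"), strict=False)
model.eval()

with torch.no_grad():                                  # inference only, no gradients
    synthetic = model(torch.randn(1, 1, 64, 64, 64))   # one synthetic 64^3 volume
print(tuple(synthetic.shape))
```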
Table 8. A summary of the evaluation metrics used in the reviewed GAN-based 3D medical imaging studies.
Evaluation Metric | Full Name of Metric | Citation (Year)
PSNR | Peak signal-to-noise ratio | [73] 2024; [76] 2024; [67] 2023; [78] 2022; [87] 2023; [89] 2023; [95] 2024; [98] 2023; [105] 2024; [107] 2023; [109] 2023; [114] 2022; [115] 2024; [116] 2022; [119] 2025
MMD | Maximum Mean Discrepancy | [74] 2022; [73] 2024; [94] 2024; [99] 2022
FID | Fréchet Inception Distance | [74] 2022; [73] 2024; [76] 2024; [67] 2023; [82] 2022; [88] 2023; [90] 2024; [94] 2024; [99] 2022
LPIPS | Learned Perceptual Image Patch Similarity | [73] 2024; [107] 2023
IS | Inception Score | [74] 2022
DSC | Dice Similarity Coefficient | [75] 2024; [113] 2022; [85] 2024; [86] 2022; [93] 2024; [96] 2022; [97] 2024; [100] 2024; [101] 2022; [104] 2024; [108] 2022; [115] 2024
Jaccard | - | [75] 2024; [86] 2022; [93] 2024
Accuracy | - | [75] 2024; [80] 2024; [91] 2024; [92] 2025; [104] 2024; [106] 2024
Precision | - | [75] 2024; [86] 2022; [93] 2024; [105] 2024
HD | Hausdorff Distance | [75] 2024; [79] 2023; [93] 2024; [96] 2022; [100] 2024; [105] 2024; [115] 2024
SSIM | Structural Similarity Index Measure | [76] 2024; [78] 2022; [83] 2024; [87] 2023; [95] 2024; [98] 2023; [105] 2024; [107] 2023; [114] 2022; [115] 2024; [116] 2022
MS-SSIM | Multi-Scale Structural Similarity Index Measure | [67] 2023; [84] 2022; [94] 2024; [99] 2022
t-SNE | t-distributed Stochastic Neighbor Embedding | [67] 2023
CD | Chamfer distance | [77] 2023; [115] 2024
PC-to-PC | Point cloud-to-point cloud error | [77] 2023
MAE | Mean absolute error | [78] 2022; [95] 2024; [98] 2023; [107] 2023
U-Net score | - | [78] 2022
ED | - | [79] 2023
MSE | Mean Squared Error | [83] 2024; [87] 2023; [95] 2024; [107] 2023; [114] 2022
Sensitivity | - | [86] 2022
KID | Kernel Inception Distance | [88] 2023
MD | Mean Diffusivity | [90] 2024
AUC-PRD | Area Under the Precision-Recall Curve | [90] 2024
AUC | Area Under the Curve | [92] 2025
F1-score | - | [92] 2025; [97] 2024
VS | Volumetric Similarity | [96] 2022
IoU | Intersection over Union | [97] 2024
RMSE | Root Mean Squared Error | [105] 2024
CS | Compressed Sensing | [107] 2023
NRMSE | Normalized Root Mean Squared Error | [109] 2023
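For reference, the voxel-wise and overlap metrics in the table above have closed-form definitions and can be computed directly from paired volumes or segmentation masks. The sketch below is a minimal NumPy-only implementation of PSNR, MAE, MSE, Dice, and IoU; perceptual and distributional metrics such as SSIM, MS-SSIM, FID, MMD, KID, and LPIPS require windowed statistics or learned feature extractors and are normally taken from existing libraries rather than re-implemented.

```python
# Minimal reference implementations (a sketch, NumPy only) of common metrics from the table above.
# Voxel-wise metrics assume two volumes of identical shape; Dice and IoU assume binary masks.
import numpy as np

def mse(x, y):
    return float(np.mean((x - y) ** 2))

def mae(x, y):
    return float(np.mean(np.abs(x - y)))

def psnr(x, y, data_range=None):
    """Peak signal-to-noise ratio in dB; data_range defaults to the reference volume's dynamic range."""
    data_range = (x.max() - x.min()) if data_range is None else data_range
    return float(10.0 * np.log10((data_range ** 2) / (mse(x, y) + 1e-12)))

def dice(mask_a, mask_b):
    """Dice similarity coefficient (DSC) between two binary segmentation masks."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return float(2.0 * inter / (a.sum() + b.sum() + 1e-12))

def iou(mask_a, mask_b):
    """Intersection over union (Jaccard index) between two binary masks."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    union = np.logical_or(a, b).sum()
    return float(np.logical_and(a, b).sum() / (union + 1e-12))

if __name__ == "__main__":
    ref = np.random.rand(64, 64, 64)                    # synthetic "ground-truth" volume
    pred = ref + 0.05 * np.random.randn(64, 64, 64)     # noisy "generated" volume
    seg_ref, seg_pred = ref > 0.5, pred > 0.5           # toy binary masks
    print(f"PSNR={psnr(ref, pred):.2f} dB, MAE={mae(ref, pred):.4f}, "
          f"Dice={dice(seg_ref, seg_pred):.3f}, IoU={iou(seg_ref, seg_pred):.3f}")
```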