Review

Deep Learning for CT Synthesis in Radiotherapy

by Yike Guo 1, Yi Luo 1, Hamed Hooshangnejad 1,2, Rui Zhang 3, Xue Feng 4, Quan Chen 5, Wilfred Ngwa 2 and Kai Ding 2,*

1 Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21287, USA
2 Department of Radiation Oncology and Molecular Radiation Sciences, Johns Hopkins University, Baltimore, MD 21287, USA
3 Division of Computational Health Sciences, Department of Surgery, University of Minnesota, Minneapolis, MN 55455, USA
4 Department of Biomedical Engineering, University of Virginia, Charlottesville, VA 22904, USA
5 Department of Radiation Oncology, Mayo Clinic Arizona, Phoenix, AZ 85054, USA
* Author to whom correspondence should be addressed.
Bioengineering 2025, 12(12), 1297; https://doi.org/10.3390/bioengineering12121297
Submission received: 24 October 2025 / Revised: 21 November 2025 / Accepted: 24 November 2025 / Published: 25 November 2025

Abstract

With the rapid development of artificial intelligence (AI), various deep learning (DL) methods have been introduced into radiation oncology. Among them, the generation of synthetic Computed Tomography (sCT) images has attracted increasing attention, as it supports different clinical scenarios, from image-guided adaptive radiotherapy (IGART) to the simulation-free workflow. This review provides a comprehensive overview of recent studies on DL-based sCT synthesis in radiotherapy from multiple imaging modalities, including Cone-Beam CT (CBCT), Magnetic Resonance Imaging (MRI), and diagnostic CT, and discusses their clinical applications in CBCT-based online adaptive radiotherapy, MRI-guided radiotherapy, and simulation-free workflows. We also examine the architectures of representative DL models, such as convolutional neural networks (CNNs) and generative adversarial networks (GANs), and summarize emerging training strategies. Finally, we discuss current challenges in translating DL algorithms into clinical practice and suggest potential directions for future research. Overall, this paper highlights the potential of AI-driven sCT generation to advance treatment planning by reducing imaging burden, improving dose accuracy, and accelerating workflow efficiency, ultimately improving patient outcomes.

Graphical Abstract

1. Introduction

Medical image translation has emerged as a rapidly growing field within radiation oncology. Specifically, this denotes the transformation of images from one modality to another [1,2]. In the context of radiation therapy, Computed Tomography (CT) serves as the primary imaging modality, as it provides reliable electron density information essential for accurate dose calculation and treatment plan adaptation, i.e., replanning [3]. Nevertheless, CT imaging presents several limitations, including patient exposure to ionizing radiation and increased complexity and treatment delay in clinical workflows [4]. To mitigate these challenges, researchers have developed methodologies to generate synthetic CT (sCT) images from alternative modalities, such as Magnetic Resonance Imaging (MRI) and Cone-Beam CT (CBCT).
The advent of MRI-guided radiotherapy, facilitated by the development of MRI-Linac systems, has gained increasing attention [5,6,7,8]. These systems enable online adaptive treatment and provide real-time imaging throughout radiation delivery. A significant advantage of MRI lies in its superior soft tissue contrast, which enhances the precision of tumor localization and organ-at-risk (OAR) delineation [9,10]. However, as MRI inherently lacks electron density information critical to dose calculation, treatment planning and plan adaptation, conventional workflows often necessitate an additional planning CT scan. This requirement introduces several disadvantages, including potential image registration errors and additional patient exposure to radiation. MRI-only workflows have been proposed to address these issues by directly generating sCT images from MRI data. This innovation obviates the need for a separate CT scan, reduces registration uncertainty, and streamlines the clinical workflow [11,12,13].
Additionally, similar techniques have been applied to enhance the quality of CBCT, a 3D X-ray imaging technique commonly employed in image-guided adaptive radiotherapy (IGART) for both photon and proton modalities. However, CBCT images are affected by artifacts resulting from scatter noise and truncated projections, which restrict their utility for online plan adaptation [3,14]. Converting CBCT to sCT facilitates accurate dose computation and improves image quality.
Early sCT methods relied on deformable image registration or rule-based mapping. These approaches required careful tuning and were sensitive to variations in input. They also struggled with modality-specific artifacts [9]. Deep learning (DL) has changed this landscape. With the rise of convolutional neural networks (CNNs), generative adversarial networks (GANs), Transformers, and Diffusion models, sCT synthesis has become faster, more accurate, and less dependent on manual intervention.
Given the rapid development of this field, we aim to provide a comprehensive review of deep learning–based sCT generation. We begin by summarizing model architectures and training strategies and then explore clinical applications across different radiotherapy scenarios. Finally, we discuss current challenges and suggest future research directions.
Several review papers have previously summarized developments in deep learning-based sCT generation [1,3,9,14,15,16]. While these works provide valuable overviews, they are often limited to either CBCT or MRI synthesis or focus primarily on technical architectures. In contrast, this review aims to provide a unified and up-to-date perspective across three major radiotherapy scenarios (CBCT-based online adaptive radiotherapy, MRI-guided radiotherapy, and the simulation-free workflow), with an emphasis on model design, training strategies, and publicly available resources. Our goal is to bridge the gap between algorithm development and clinical translation, highlight open challenges and future directions for this rapidly evolving field, and provide a practical guideline for researchers seeking reproducible and open-source tools in this domain.

2. Public Dataset and Data Preprocessing

Before exploring specific model architectures, we first provide an overview of commonly used datasets and preprocessing techniques, which build the foundation for deep learning-based sCT synthesis.

2.1. Public Dataset

Publicly available datasets play a vital role in benchmarking sCT generation methods by enabling reproducibility, cross-study comparison, and model generalization across clinical scenarios. Below, we summarize several widely used datasets.
SynthRAD2023 dataset [17] provides paired MRI-CT and CBCT-CT images for brain and pelvic regions, collected from 540 patients in each anatomical site across three Dutch medical centers. All patients received external beam radiotherapy using photon or proton beam therapy. The extracted DICOM files were first converted to compressed NIFTI format and anonymized. To obtain uniform voxel spacing, images were resampled to 1 × 1 × 1 mm3 for the brain and 1 × 1 × 2.5 mm3 for the pelvis. Rigid registration between CT and MRI/CBCT was performed using Elastix [18] to address inter-modality misalignment. Binary masks of patient outlines were generated using thresholding and provided to standardize the field of view and support evaluation of synthetic CTs.
Building upon SynthRAD2023, SynthRAD2025 [19] significantly expands dataset scale and anatomical diversity. It includes 2362 cases, comprising 890 MRI–CT pairs and 1472 CBCT–CT pairs from five European university centers. The dataset spans head-and-neck, thoracic, and abdominal regions, with patients treated with external beam radiotherapy. Preprocessing followed the SynthRAD2023 protocol, with additional steps including defacing and deformable registration. However, deformable CTs are not provided for the training dataset to avoid biasing model development. Together, SynthRAD2023 and SynthRAD2025 datasets serve as large-scale, multi-institutional benchmarks for CBCT-to-CT and MRI-to-CT synthesis.
The Gold Atlas [20] contains MRI (T1- and T2-weighted) and CT images collected from 19 male patients across three Swedish radiotherapy departments. Patients with prostate or rectal cancer treated with curative radiotherapy were included. Nine pelvic structures were independently delineated by five experts, with consensus labels generated. An automated method (STAPLE [21]) was also used to produce probabilistic segmentation maps. This dataset is widely used for MRI-to-CT synthesis and downstream segmentation tasks.
Pelvic Reference Data [22] includes 58 pelvic CBCT-CT pairs with expert-annotated anatomical landmarks. These landmarks were used to derive reference rigid and affine transformations, which serve as ground truths for registration benchmarking. While primarily intended for registration, the dataset is also suitable for CBCT-to-CT synthesis studies.
Pancreatic-CT-CBCT-SEG dataset [23] comprises 40 CT and CBCT pairs from patients with locally advanced pancreatic cancer receiving ablative radiotherapy in deep-inspiration breath-hold mode. Each patient has one planning CT and two CBCT scans at the time of treatment. Rigid registration was applied to align CBCTs to CT, followed by voxel-wise resampling. Both raw and resampled CBCTs are included [24]. The dataset is notable for CBCT artifacts arising from variability in breath-hold levels, making it a challenging benchmark for sCT generation.
4D Lung dataset [25] consists of 4D fan-beam CT (FBCT) and 7 weekly 4D CBCT scans collected from 20 locally advanced non-small cell lung cancer patients receiving chemoradiotherapy. The 4D images consisted of a 3D image set for each of 10 respiratory phases. Target volumes and OARs were delineated by an expert radiation oncologist on all 4D-FBCT scans and selected 4D-CBCT phases [26], enabling studies of synthetic 4D imaging.
All datasets reviewed in this study are publicly available, and we have included their official links and last access dates in the references to ensure transparency and reproducibility. While additional institutional datasets may exist in the literature, our focus is on open-source datasets that support community benchmarking and cross-study comparison.

2.2. Preprocessing Techniques

Preprocessing is important in generating high-quality sCT images by addressing limitations in data acquisition and enhancing the performance and stability of deep learning models. Common preprocessing techniques can be broadly categorized into spatial alignment, intensity standardization, and shape uniformity.
  • Spatial alignment: Rigid and deformable registration are frequently applied to align multimodal images into a common anatomical space. Rigid registration is often sufficient for rigid structures such as the brain, while deformable registration is preferred in regions with higher anatomical variability such as the pelvis. These methods mitigate inter-modality misalignment and ensure anatomical correspondence across modalities [14,15];
  • Intensity standardization: To account for scanner-related intensity variations, intensity normalization is commonly employed at the population or patient level, either through linear scaling or z-score standardization using dataset-specific means and standard deviations. Intensity clipping can remove extreme outliers and suppress noise artifacts, improving data homogeneity [27]. Some studies also apply histogram matching to align intensity distributions across scans [15];
  • Shape uniformity: Resampling is used to standardize voxel spacing across datasets, while resizing ensures a consistent input shape compatible with the model architecture. These operations are particularly important when combining multi-center or multi-modal data with heterogeneous acquisition protocols [27,28];
  • Others: For MRI-based synthesis tasks, techniques such as N4 or N3 bias field correction are applied to reduce low-frequency intensity inhomogeneities and improve soft tissue contrast. In addition, cropping and geometry correction may be applied to remove unnecessary background or correct for distortions, particularly in MRI [15].
Collectively, these preprocessing steps enhance model robustness, improve generalizability across patient populations and imaging protocols, and enable more reliable benchmarking across studies. When combined with appropriate data augmentation strategies, preprocessing serves as a foundational component of effective sCT model development. Comprehensive preprocessing pipelines have been described in several studies [3,14,29] and the SynthRAD2023 and SynthRAD2025 datasets also provide publicly available code implementations to facilitate standardized preprocessing [17,19].
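As a concrete illustration of the intensity standardization steps above, the following minimal numpy sketch clips extreme HU outliers and then applies patient-level z-score normalization. The clipping window and function name are illustrative choices, not values taken from the cited studies or datasets.

```python
import numpy as np

def preprocess_ct(volume, clip_range=(-1000.0, 2000.0)):
    """Clip extreme HU outliers, then z-score normalize.

    `clip_range` is an illustrative HU window, not a value from the review.
    """
    v = np.clip(volume.astype(np.float32), *clip_range)
    return (v - v.mean()) / (v.std() + 1e-8)

# Toy 3D volume standing in for a CT scan.
vol = np.random.default_rng(0).normal(0.0, 500.0, size=(8, 32, 32))
out = preprocess_ct(vol)
```

After this step, every patient volume has approximately zero mean and unit variance, which stabilizes optimization when pooling multi-center data.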

3. Deep Learning Models

To generate sCT images from different modalities, such as CBCT or MRI, various deep learning models have been explored. These models can be categorized into four main groups: CNNs, GANs, transformer-based architectures, and Diffusion models. Each modeling approach offers unique advantages in addressing the challenges of artifact removal and intensity fidelity, which are core requirements for the clinical use of sCT.

3.1. Convolutional Neural Networks (CNNs)

CNNs have been applied in many sCT generation frameworks across both CBCT- and MRI-based modalities [30,31,32,33,34,35,36]. Among them, U-Net [37] and its variants have been the most widely adopted architectures. As depicted in Figure 1, due to the encoder–decoder structure and the inclusion of skip connections, U-Net models enable the preservation of spatial resolution and contextual information of the input modality throughout the network [38]. The nnU-Net (no-new U-Net) [39], a widely recognized framework for medical image segmentation, has also been adapted for MRI-to-CT synthesis [40]. To further enhance structural consistency, several modifications to the standard U-Net have been proposed. For instance, residual U-Net integrates residual blocks to mitigate vanishing gradients and facilitate deeper architectures [41]. Attention U-Net incorporates a self-attention scheme along the skip pathways to learn important features [42]. While CNNs are effective at learning local features, they may struggle to capture relationships between far-apart regions, which has led to interest in architectures such as transformers [14].
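To make the encoder-decoder idea concrete, the following shape-level numpy sketch mimics one U-Net level: a 2x downsampling step, a matching upsampling step, and a skip connection that concatenates encoder input with the decoder output. Convolutions and learned weights are deliberately omitted to keep the sketch minimal; all names are illustrative, not from any cited implementation.

```python
import numpy as np

def downsample(x):
    # 2x2 average pooling over the two spatial axes.
    c, h, w = x.shape
    return x.reshape(c, h // 2, 2, w // 2, 2).mean(axis=(2, 4))

def upsample(x):
    # Nearest-neighbor 2x upsampling.
    return x.repeat(2, axis=1).repeat(2, axis=2)

# One encoder-decoder level with a skip connection (shapes only).
x = np.random.default_rng(1).random((4, 16, 16))   # (channels, H, W)
enc = downsample(x)                                 # bottleneck features
dec = upsample(enc)                                 # back to input resolution
merged = np.concatenate([dec, x], axis=0)           # skip connection
```

The concatenation step is what lets the decoder recover fine spatial detail that pooling would otherwise discard.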

3.2. Generative Adversarial Networks (GANs)

GANs have been explored for medical image synthesis due to their ability to generate perceptually realistic images [43,44,45,46,47]. A standard GAN consists of a generator and a discriminator trained in an adversarial framework, where the generator learns to synthesize images that mimic real data while the discriminator aims to distinguish between real and generated images [48], shown in Figure 2. This adversarial process promotes the generation of high-fidelity outputs [49]. In medical imaging, conditional GANs (cGANs) have become particularly common, enabling modality-to-modality translation by incorporating auxiliary information such as anatomical labels or imaging conditions [50,51,52]. A widely adopted variant, CycleGAN, introduces forward and backward mappings between domains, reinforced by a cycle-consistency loss enforced via two generators and two discriminators [53]. This mechanism alleviates the dependence on deformable registration and enhances robustness to anatomical mismatches [49].
To further improve image quality and anatomical preservation, attention mechanisms have been incorporated into GAN architectures. For example, attention-gated CycleGANs have been applied to correct motion artifacts in CBCT images [46], and attention-guided GANs have been proposed to emphasize clinically relevant features [54]. Multiple studies have benchmarked and extended GAN-based models for sCT generation. For instance, cGANs have been employed on multicenter pelvic datasets [55] while different generator architectures including DenseNet, U-Net, and EmbeddedNet were compared in [56], which demonstrated that ensemble learning achieved superior cross-domain generalization. Additional innovations include integrating histogram matching into CycleGAN for improved CBCT correction [57], spatial self-attention for structure-aware synthesis [58], and contrastive learning strategies tailored for 4D synthetic CT generation [59]. Despite these advances, GAN-based models often suffer from training instability, necessitating careful design and evaluation, especially in high-stakes applications like radiotherapy planning.

3.3. Transformer-Based Networks

Transformer-based architectures, originally developed for natural language processing tasks, leverage self-attention mechanisms to capture long-range dependencies in sequential data [60]. Their ability to model complex structures has led to rapid adoption in vision tasks [61], including medical image synthesis. In the context of sCT generation, Transformer models have demonstrated strong potential in capturing global anatomical context and preserving structural integrity [62,63]. For instance, TransCBCT [62] employs a Transformer backbone for CBCT-to-sCT synthesis and was shown to outperform CycleGAN in both image quality and dosimetric accuracy. A residual visual Transformer that improves synthetic CT reconstruction by incorporating residual learning into the attention layers was introduced in [64]. To address region-specific synthesis challenges, a high-frequency information-guided synthesis model was proposed, which is a Transformer-based framework that effectively synthesizes multi-region pseudo-CTs from diverse MR sequences by emphasizing high-frequency anatomical features [65].

3.4. Diffusion Models

Another state-of-the-art family of architectures for medical image synthesis is Diffusion models, which iteratively generate images by reversing a forward noising process, as presented in Figure 3. These models transform random noise into structured data through a series of denoising steps and have shown strong generative capabilities in medical image translation tasks [2]. Diffusion models have been increasingly applied to improve image fidelity and anatomical consistency across modalities such as CBCT and MRI [66,67,68,69].
Several studies highlight the effectiveness of Diffusion models in surpassing traditional architectures like GANs. For example, it was demonstrated that a Diffusion-based CBCT-to-CT model outperformed GANs in lung imaging [66]. Additionally, an energy-guided Diffusion framework was proposed to enhance CBCT quality in unpaired settings, specifically tailored to meet the demands of adaptive radiotherapy [70]. To further enhance the flexibility and quality of synthesis, Diffusion Schrödinger bridge models were used to replace the standard Gaussian distribution with a learned prior distribution, improving both generation quality and efficiency [71]. On the MRI-to-CT front, a boundary-guided adversarial Diffusion model was designed to leverage unpaired data effectively [72].
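The forward noising process underlying these models has a simple closed form, x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps, where alpha_bar_t is the cumulative product of (1 - beta). The numpy sketch below samples x_t for a stand-in CT slice under a typical linear beta schedule; schedule values and names are illustrative, not taken from the cited works.

```python
import numpy as np

def forward_diffuse(x0, t, betas, rng):
    """Sample x_t ~ q(x_t | x_0) in closed form:
    x_t = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * noise."""
    alpha_bar = np.cumprod(1.0 - betas)[t]
    noise = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * noise, noise

rng = np.random.default_rng(0)
betas = np.linspace(1e-4, 0.02, 1000)      # a typical linear schedule
x0 = rng.standard_normal((64, 64))         # stand-in for a CT slice
xt, eps = forward_diffuse(x0, t=999, betas=betas, rng=rng)
```

At the final timestep, alpha_bar is tiny, so x_t is almost pure noise; the reverse (denoising) network is trained to undo exactly these steps.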

3.5. Hybrid Models

To further exploit the strengths of multiple architectures, hybrid models have emerged as a promising direction in synthetic CT generation, combining the advantages of CNNs, transformers, adversarial training, and Diffusion mechanisms. These models aim to improve both global anatomical consistency and local structural fidelity. For instance, a generative-transformer adversarial-CNN framework integrates Transformer-based global modeling with adversarial and convolutional local refinements, achieving high-quality sCT synthesis even in low-dose CBCT scenarios [73]. Other hybrid designs have incorporated Transformer-Diffusion synergy. Hu et al. introduced a U-Net-based Diffusion model enhanced with Vision Transformer blocks to refine CBCT-to-CT translation [74]. Similarly, Swin-VNet (a Transformer variant) was used to guide MRI-to-CT Diffusion synthesis [75], while Viar-Hernandez et al. utilized SwinUNet in a Diffusion framework for CBCT-to-CT translation [76]. Another innovative example is the Global-Local Feature and Contrast learning (GLFC) framework by [77], which incorporates Mamba modules, designed for efficient long-sequence modeling, into a U-Net backbone to capture both global and local features. This method achieved state-of-the-art performance with improved Hounsfield Unit (HU) fidelity and structural similarity. In the GAN domain, Transformer modules have also been embedded into adversarial architectures. Hu et al. enhanced the CycleGAN model with Vision Transformer layers [78], while Rusanov et al. developed a Transformer-CycleGAN model that fuses cycle-consistency with attention-based global context understanding [79].
These hybrid approaches reflect a broader trend in medical image synthesis: rather than relying solely on a single model family, combining architectural paradigms can unlock new performance levels in terms of realism, structure preservation, and clinical applicability.

4. Training Strategies

To enable robust and generalizable sCT generation, various training strategies have been developed. These include various representations of input data, diverse supervision schemes, as well as learning paradigms.

4.1. Representation of Imaging Data

The dimensionality of the input data in sCT model development is often categorized as 2D, 2.5D, or 3D, each reflecting a trade-off between computational efficiency and spatial context. Among them, 2D models have been the most widely employed due to their low computational requirements and ease of implementation, as they process each slice independently [2,15]. However, the absence of inter-slice context can lead to anatomical discontinuities across neighboring slices. In contrast, 3D models leverage volumetric inputs to learn spatial continuity and anatomical coherence across the full volume [75,76,80,81,82,83], but place considerable demands on computational resources and dataset size. To balance performance and computational cost, 2.5D models have been introduced. These typically feed multiple consecutive slices as contextual input to 2D networks, enhancing spatial consistency without the full cost of 3D modeling [36,47]. In particular, Kondo et al. showed that incorporating neighboring slices into a 2D CNN improved spatial consistency in sCT generation [36].
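A minimal numpy sketch of the 2.5D input construction: the target slice and its neighbors are stacked along the channel axis, with edge slices replicated at the volume boundary. Function and parameter names are hypothetical, not from any cited implementation.

```python
import numpy as np

def make_25d_input(volume, idx, context=1):
    """Stack slice `idx` and its `context` neighbors along the channel
    axis, replicating edge slices at the volume boundary."""
    n = volume.shape[0]
    picks = [min(max(i, 0), n - 1) for i in range(idx - context, idx + context + 1)]
    return np.stack([volume[i] for i in picks], axis=0)

vol = np.random.default_rng(0).random((40, 64, 64))   # (slices, H, W)
x = make_25d_input(vol, idx=20, context=1)            # 3-channel 2.5D input
edge = make_25d_input(vol, idx=0, context=1)          # boundary handling
```

The resulting stack is fed to an ordinary 2D network whose first convolution simply sees the neighbors as extra input channels.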
Comparative evaluations of these dimensional strategies have reported mixed results. Neppl et al. observed a slight advantage of 2D models compared to 3D models, while both approaches achieved comparable dosimetric accuracy [84]. Findings from the SynthRAD2023 challenge further emphasize that optimal dimensionality may be dependent on imaging modality. Specifically, 2D models outperformed both 2.5D and patch-based 3D architectures for MRI-to-CT synthesis, while 3D models exhibited superior results in CBCT-to-CT synthesis across both pelvic and brain datasets [27]. These results contrast with earlier studies reporting that 2.5D or 3D models can outperform 2D counterparts in MRI-to-CT tasks [85,86,87]. This suggests that the effectiveness of dimensionality strategies may vary by imaging modality and anatomical site.
Anatomical view design has emerged as another important approach to improve structural consistency in sCT synthesis. Instead of depending exclusively on input dimensionality, several works have proposed training separate models for different anatomical planes (axial, coronal, and sagittal) and averaging their predictions to improve robustness. Spadea et al. pioneered this approach by averaging predictions from three independent CNNs [86]. Saint-Esteven et al. extended this idea using residual vision transformers [64]. Further, Yoganathan et al. trained a single-view axial model as well as a multiplanar model, which adopted a similar network as [86] and reported no statistically significant difference between these two models in dose prediction accuracy [42].
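The multiplanar ensembling strategy described above can be sketched as follows: a slice-wise model is applied along each anatomical axis and the three resulting volumes are averaged. This numpy sketch uses an identity function as a stand-in "model", so the fused output should equal the input; all names are illustrative.

```python
import numpy as np

def multiplanar_average(volume, predict):
    """Run a slice-wise `predict` function along axial, coronal, and
    sagittal orientations and average the three predicted volumes."""
    outs = []
    for axis in (0, 1, 2):
        moved = np.moveaxis(volume, axis, 0)             # slicing axis first
        pred = np.stack([predict(s) for s in moved], axis=0)
        outs.append(np.moveaxis(pred, 0, axis))          # restore orientation
    return np.mean(outs, axis=0)

# With an identity "model", the ensemble must return the input unchanged.
vol = np.random.default_rng(0).random((8, 8, 8))
fused = multiplanar_average(vol, predict=lambda s: s)
```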

4.2. Supervision Paradigms

Most deep learning models are developed with pixelwise supervision and trained on aligned image pairs (MRI-CT or CBCT-CT). Paired training facilitates stable optimization with intensity-based losses, leading to superior HU accuracy and anatomical preservation [14,88]. Nevertheless, acquiring high-quality paired datasets presents several challenges. The two modalities are typically acquired with a significant time gap, and the repeated use of ionizing radiation is unacceptable in vulnerable populations such as pregnant patients. In addition, differences in acquisition timing can introduce spatial discrepancies due to tumor progression. While deformable image registration can warp one image onto the other, it can also introduce additional anatomical distortion or artifacts [89,90,91].
To address these limitations, unpaired training methods have attracted increasing attention, as they avoid complicated data preprocessing and are more adaptable across institutions. CycleGANs are a popular choice since they can learn the mapping between two modalities by imposing cycle-consistency constraints [43,44,92]. However, unpaired learning can degrade model performance, for example through compromised HU accuracy and poor preservation of anatomical structure. To address these shortcomings, recent methods have attempted to incorporate anatomical priors into unpaired learning frameworks. For instance, a path- and bone-contour regularized training strategy was proposed, which learns domain mappings in a shared latent space using neural Ordinary Differential Equations (ODEs) [91]. Likewise, Gong et al. developed a boundary information-guided adversarial Diffusion model, outperforming standard CycleGANs on pelvic MRI datasets [72]. These approaches improve structural accuracy in the absence of voxel-level supervision and have shown competitive results compared to traditional paired setups.
Given their reliance on paired datasets, most sCT methods adopt supervised learning. Because aligned datasets are limited, unsupervised learning has been introduced to generate sCT without ground-truth labels. Early approaches utilized adversarial networks with additional structural constraints, such as the structure-constrained CycleGAN proposed in [93], which outperformed the traditional CycleGAN. Building on this, recent works leverage the flexibility of Diffusion models and representation learning. For example, Peng et al. developed a patient-specific, score-based Diffusion model that generates sCT with reduced artifacts and accurate HU values [94]. Similarly, Zhang et al. introduced a disentanglement learning framework that shares the information of image pairs in the latent space [95]. Other innovations focus on edge and structural fidelity. Zhu et al. proposed an edge-aware unsupervised GAN to enhance boundary delineation and recover missing anatomical structures [96]. Moreover, Szmul et al. introduced an unsupervised framework capable of simultaneously synthesizing sCT and segmenting OARs, without relying on CBCT segmentations during either training or inference [97].

4.3. Learning Paradigms

Loss functions are essential to optimizing deep learning models, and many have been used in sCT synthesis. Traditionally, L1 and L2 losses, the mean absolute difference and the mean squared error between predicted and ground-truth values, respectively, are used in U-Net-, Transformer-, and Diffusion-based models to enforce per-pixel accuracy. Another commonly used metric is the structural similarity index (SSIM) loss, which enforces structural consistency. Several studies combine L1, L2, and SSIM losses in weighted sums, with tunable hyperparameters balancing pixel-level accuracy and perceptual similarity [1,14,98]. In GAN-based models, the adversarial loss is the main driver of realism, training the generator to output images that are indistinguishable from real ones. Hybrid loss functions have been proposed to further enhance image quality [14,77,92,93,99,100,101]. These typically combine pixel-wise losses (L1/L2) with region-of-interest (ROI)-oriented losses [77,99,100] or include perceptual losses based on pre-trained networks (e.g., VGG) to promote high-level structural and textural fidelity [92].
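A hedged numpy sketch of such a weighted hybrid loss is shown below. The SSIM term is simplified to a single global window (real implementations use local sliding windows and per-window averaging), and the weights are illustrative placeholders rather than values from any cited study.

```python
import numpy as np

def global_ssim(x, y, c1=1e-4, c2=9e-4):
    # Simplified single-window SSIM over the whole image.
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx**2 + my**2 + c1) * (vx + vy + c2))

def hybrid_loss(pred, target, w1=1.0, w2=1.0, ws=0.1):
    # Weighted sum of L1, L2, and an SSIM term; weights are illustrative.
    l1 = np.abs(pred - target).mean()
    l2 = ((pred - target) ** 2).mean()
    return float(w1 * l1 + w2 * l2 + ws * (1.0 - global_ssim(pred, target)))

x = np.random.default_rng(0).random((32, 32))
zero_loss = hybrid_loss(x, x)   # identical images give zero loss
```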
Another popular method for training is guided training, which is usually guided by either image frequency characteristics or anatomical structure. Frequency-guided learning has been widely used in recent works [65,68,102,103,104,105] which mostly employed Diffusion architectures. For example, Li et al. proposed a frequency-aware Diffusion model, which designs low- and high-pass filters to reconstruct information of intermediate frequency image content [68]. Building on this, Zhang et al. incorporated a high-frequency optimization module based on wavelet transform to improve textural details [103], while Luo et al. proposed a high-frequency smoothness constraint to maintain edge sharpness and fine structures [105]. In contrast, anatomy-guided models are also proposed to maintain the structural fidelity during the synthesis stage [72,93,99,106,107,108,109]. Yang et al. designed a structure-constrained GAN for unsupervised MRI-to-CT translation which incorporated a structure-consistency loss [93]. A dual-stream structure-aware GAN was proposed in [106] to capture the structural information within the output images. Additionally, bone region fidelity was highlighted by leveraging a multi-task network focused on bone density restoration by Kaushik et al. [99]. Transformer-based models have also introduced structural attention mechanisms to enhance the anatomical realism [109], and Yu et al. developed a multi-level, hierarchical discriminator to enhance the fidelity of synthesized MR images [108].
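As a rough illustration of the frequency-guided idea, the following numpy sketch splits an image into low- and high-frequency components with an ideal low-pass mask in Fourier space. Actual methods cited above use learned or wavelet-based filters; the cutoff here is an arbitrary illustrative value.

```python
import numpy as np

def frequency_split(img, cutoff=0.1):
    """Separate low- and high-frequency content using an ideal low-pass
    mask in Fourier space; `cutoff` is a fraction of the sampling
    frequency, chosen purely for illustration."""
    f = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.mgrid[-h // 2:h - h // 2, -w // 2:w - w // 2]
    mask = np.sqrt((yy / h) ** 2 + (xx / w) ** 2) <= cutoff
    low = np.fft.ifft2(np.fft.ifftshift(f * mask)).real
    high = img - low               # residual high-frequency detail
    return low, high

img = np.random.default_rng(0).random((64, 64))
low, high = frequency_split(img)
```

A frequency-guided model would supervise the two components separately, e.g., weighting the high-frequency residual more heavily to preserve edges.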
To mitigate data privacy concerns and promote multicenter research, federated learning (FL) has been integrated into synthetic CT generation [58,110,111]. One of the earliest works [110] utilized a cross-silo horizontal FL framework for MRI-to-CT synthesis, allowing several institutions to collaboratively train a U-Net-based architecture without sharing any raw data. Building on this model, researchers further developed the method for CBCT-to-CT synthesis, focusing on privacy protection and multi-institutional collaboration with fully decentralized training (i.e., without sharing raw patient data or creating separate site-specific models) [111]. Although the FL framework shows great potential, its performance is constrained by limited data diversity, with the most prominent errors occurring in anatomical regions that are underrepresented in the training dataset.
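The aggregation step at the heart of such cross-silo training can be sketched as a FedAvg-style weighted average of per-site model parameters. This is a generic illustration of the aggregation rule, not the exact scheme of the cited works; the site names and sizes are hypothetical.

```python
import numpy as np

def federated_average(site_weights, site_sizes):
    """FedAvg-style aggregation: a dataset-size-weighted mean of each
    site's model parameters, computed without exchanging raw images."""
    total = sum(site_sizes)
    return [
        sum(w[k] * (n / total) for w, n in zip(site_weights, site_sizes))
        for k in range(len(site_weights[0]))
    ]

# Two hypothetical institutions with one-parameter "models".
site_a = [np.array([1.0])]
site_b = [np.array([3.0])]
global_weights = federated_average([site_a, site_b], site_sizes=[10, 30])
```

The server broadcasts `global_weights` back to each site, which then continues local training for the next communication round.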
The choice of training strategy plays a pivotal role in determining the robustness, generalizability, and clinical utility of sCT models. Depending on the anatomical region, imaging modality, and data availability, different training paradigms can significantly impact performance in downstream applications. In the next section, we review how these models are deployed across various clinical scenarios.

5. Application in Radiotherapy

CT synthesis serves as a cornerstone for enabling IGART. Depending on the imaging modality and clinical setting, sCT generation facilitates accurate dose calculations, motion management, and online adaptation, thereby enhancing both treatment precision and workflow efficiency. In this section, we categorize recent advances into three application domains: CBCT-based online adaptive radiotherapy, MRI-guided radiotherapy, and simulation-free workflow. To facilitate reproducibility, we summarize in Table 1 a selection of deep learning models for sCT generation that are publicly available. Only models with accessible code or pretrained weights were included.

5.1. CBCT-Based Online Adaptive Radiotherapy

CBCT-guided ART offers daily treatment adaptation using pre-treatment imaging, which improves target conformality and OAR sparing [14,49]. However, clinical use of CBCT is restricted by inherent limitations such as image artifacts, low soft-tissue contrast, and imprecise HU values, which can be critical in dose-sensitive settings such as proton therapy [34,112,113]. To address these problems, deep learning-based CBCT-to-CT translation is a promising approach that provides dose-computable images from CBCT without acquiring an additional CT or incurring additional radiation exposure [114,115,116].
An increasing number of works have investigated sCT generation across body sites, including the brain [74,117], head and neck [50,89,118], thorax [35,59,66], breast [114], pelvis [47,119], spine [120], and nasopharynx [32,52]. Among them, the thorax poses specific challenges related to respiratory motion, which has led to the development of 4D CBCT-to-sCT translation methods [59,121]. In addition, investigators have developed methods such as low-dose CBCT-based sCT generation [116,122] and dual-energy CBCT-based synthesis [76,123,124] to improve soft-tissue representation and HU uniformity.
One of the most important goals in CBCT-based image synthesis is artifact reduction. Image degradation arising from scatter, cone-beam geometry, and motion can be addressed through deep learning approaches in either the projection or image domain [125,126]. Dual-stage networks that explicitly decouple artifact removal from sCT generation have also proven effective, particularly for spine and abdominal targets [120,127]. Adversarial networks such as CycleGAN [112,114], which learns from unpaired CT-CBCT data, and cGANs [52] facilitate this domain translation. In particular, hybrid pipelines such as ARTInp [128] and GLFC [77] incorporate additional domain knowledge or attention mechanisms to remove noise and improve context-aware generation.
The accuracy of sCT images for dose calculation has been verified in quantitative studies. Several report that dose-volume histogram (DVH) parameters calculated from sCTs agree closely with those of a reference CT, supporting their clinical feasibility [113,129]. Retrospective investigations have also demonstrated the ability of sCTs to trigger plan adaptation in proton therapy workflows, potentially reducing the number of repeat CTs performed [129].

5.2. MRI-Guided Radiotherapy

MRI-guided radiotherapy provides superior soft-tissue contrast without exposing patients to ionizing radiation, making it a compelling imaging modality for treatment planning. However, its lack of a direct relationship to electron density or HU prevents its use in accurate dose calculation [12,43]. To address this issue, deep learning-based MRI-to-CT synthesis methods have been developed to produce sCTs from MRI [30,31]. These methods have been investigated in various anatomical sites, such as the brain [1,36,84], head and neck [33,64], pelvis [55,80,130], and thorax [51].
There is growing interest in low-field MRI-based sCT owing to its affordability and seamless integration with hybrid systems such as MRI-Linacs. Although low-field MRI has an inherently lower signal-to-noise ratio than high-field scanners, it provides a realistic option for real-time on-board imaging in IGART workflows. Recent studies have demonstrated the feasibility of generating high-quality sCTs from low-field MR inputs using conditional GANs and residual transformer architectures [45,51,64]. These models have achieved anatomical accuracy as well as clinically acceptable dosimetric performance, establishing low-field MRI as an alternative basis for MRI-only planning pipelines.
Another challenge in MRI-to-CT synthesis is the heterogeneity of MR sequences used in clinical protocols. Commonly employed sequences (e.g., T1-weighted, T2-weighted, and Dixon imaging) differ considerably in tissue contrast, spatial resolution, and acquisition parameters. To address this, several studies have proposed methods that generate sCTs from different MR sequences within a single model. For instance, Zimmermann et al. developed an MRI-sequence-independent sCT generator based on a 3D U-Net architecture, trained on T1-, T2-, and contrast-enhanced T1-weighted images [82]. Zhong et al. designed MTT-Net, a multi-scale token-aware Transformer network suitable for various anatomical regions [131], while Zhao et al. proposed a high-frequency-information-guided network to generate sCTs from different MR sequences [65].
As MRI-to-CT models approach clinical translation, their generalizability across institutions, scanners, and patient populations becomes increasingly important. However, collecting such large, varied datasets is typically hindered by privacy laws and data-sharing policies. In response, recent research has investigated federated learning for joint model training without sharing raw patient data. For example, RadiaSync [58] showed that decentralized training over multiple sites can match the performance of centralized models. At the multicenter level, sCT models have also been shown to maintain high dosimetric accuracy and fidelity even when trained on heterogeneous datasets aggregated from different acquisition sources [130,132,133].

5.3. Simulation-Free Workflow

The clinical motivation for simulation-free workflows is strongest in fast-progressing diseases such as non-small cell lung cancer (NSCLC), where radiotherapy plays an increasingly pivotal role in local tumor control and survival outcomes, and where delays in time-to-treatment initiation (TTI) significantly compromise patient outcomes [134,135,136,137,138]. Studies have shown that each four-week delay in radiotherapy is associated with a 6–8% increase in mortality [139] and a 13% chance of disease upstaging due to new lymph node involvement or metastasis [140]. Despite these risks, conventional workflows often involve more than four weeks between diagnosis and treatment initiation due to sequential simulation CT acquisition, manual contouring, and plan generation [141,142,143]. Simulation-free strategies aim to bypass these bottlenecks by synthesizing planning-quality CTs from existing diagnostic imaging, thereby enabling faster and more streamlined treatment planning. In time-critical and resource-constrained settings, such workflows may eliminate imaging redundancy, reduce radiation exposure, and improve patient access to timely care [144,145].
Recent deep learning techniques have expanded the scope of simulation-free workflows by predicting deformation vector fields (DVFs) that map diagnostic images into the planning geometry. Instead of generating sCTs directly, these models produce anatomically aligned transformations that map diagnostic CTs (dCTs) to dose-calibrated pCT-like representations [146,147,148]. For example, deepPERFECT learns DVFs from diagnostic-to-planning CT pairs to produce anatomically corrected sCTs for dose calculation [146]. Building on this, DAART extends the concept to a full adaptive radiotherapy model, reducing the current median four-week interval from diagnosis to treatment initiation to two weeks [147]. Similarly, Zhu et al. applied a DVF prediction model to lattice radiotherapy, showing high gamma pass rates and dosimetric consistency in vitro for complex abdominal plans [148].
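Conceptually, applying a predicted DVF amounts to resampling the diagnostic image at displaced coordinates. A minimal sketch, assuming the DVF stores per-voxel displacements in voxel units (real pipelines additionally handle world coordinates, resampling grids, and HU calibration):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def apply_dvf(moving, dvf):
    """Warp a (diagnostic) CT image with a dense deformation vector
    field. `dvf` has shape (ndim, *moving.shape) and stores, for each
    output voxel, the displacement (in voxels) added to its coordinate
    before sampling the moving image (backward mapping)."""
    grid = np.indices(moving.shape).astype(float)
    sample_at = grid + dvf  # where each output voxel reads from
    # Linear interpolation; edge values are replicated outside the volume.
    return map_coordinates(moving, sample_at, order=1, mode="nearest")
```

Here the DVF would come from a trained regression network; the warping itself is the same differentiable resampling used in spatial-transformer layers.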
Despite promising results, simulation-free workflows must account for anatomical differences, changes in patient positioning, and motion-induced artifacts, especially in abdominal and thoracic regions. In addition, proper HU calibration remains critically important for proton therapy, as well as for highly conformal photon plans.
Table 1. Publicly available deep learning models for synthetic CT generation in radiotherapy.
| Application | Paper | Dataset Publicity | Model | Code Link |
| --- | --- | --- | --- | --- |
| CBCT to CT | [83] | Pancreatic-CT-CBCT-SEG; SynthRAD2023 | 3D U-Net | https://github.com/MaxTschuchnig/EnhancingSyntheticCTfromCBCTviaMultimodalFusionandEnd-To-EndRegistration |
| CBCT to CT | [149] | private | CycleGAN, StarGAN | https://github.com/Paritt/sCT-via-StarGAN-and-CycleGAN |
| CBCT to CT | [77] | SynthRAD2023 | Mamba-enhanced UNet | https://github.com/HiLab-git/GLFC |
| CBCT to CT | [150] | private | Physics-based network | https://github.com/Pangyk/SinoSynth |
| CBCT to CT | [67] | private | Diffusion | https://github.com/junbopeng/conditional_DDPM * |
| CBCT to CT | [68] | private; Organs at Risk dataset [151,152] | Diffusion | https://github.com/Kent0n-Li/FGDM |
| MRI to CT | [91] | Gold Atlas; SynthRAD2023 | Neural ODE-based | https://github.com/kennysyp/PaBoT |
| MRI to CT | [40] | SynthRAD2023 | nnU-Net | https://github.com/Phyrise/nnUNet_translation |
| MRI to CT | [149] | private | CycleGAN, StarGAN | https://github.com/Paritt/sCT-via-StarGAN-and-CycleGAN |
| MRI to CT | [131] | private | Transformer | https://github.com/SMU-MedicalVision/MTT-Net |
| MRI to CT | [75] | private | Diffusion | https://github.com/shaoyanpan/Synthetic-CT-generation-from-MRI-using-3D-transformer-based-denoising-diffusion-model * |
| MRI to CT | [69] | Gold Atlas | Diffusion | https://github.com/QingLyu0828/diffusion_mri_to_ct_conversion |
* Available on GitHub (https://github.com/) but not officially released in the paper. All code links accessed on 20 October 2025.

6. Evaluation Metrics

The performance of sCT generation is commonly assessed using three categories of metrics: intensity-based similarity, geometric fidelity, and dosimetry-based evaluation. Each of these captures complementary aspects of image quality and clinical usability, which are listed in Table 2.

6.1. Intensity-Based Metrics

To evaluate voxel-wise intensity similarity between sCTs and reference CTs, the most commonly used metrics are Mean Absolute Error (MAE), Peak Signal-to-Noise Ratio (PSNR), and the Structural Similarity Index (SSIM); Mean Error (ME) and Root Mean Square Error (RMSE) are also frequently reported. The error-based metrics (MAE, ME, RMSE) directly evaluate differences in HU, with lower values indicating higher fidelity, whereas higher PSNR and SSIM values indicate better agreement. Normalized Cross Correlation (NCC) is also reported in some literature to account for distributional differences in intensity.
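For concreteness, the error-based metrics and PSNR can be computed directly in a few lines of NumPy; the optional body mask and data-range convention shown here are common but not universal reporting choices:

```python
import numpy as np

def intensity_metrics(sct, ct, data_range=None, mask=None):
    """Voxel-wise similarity between a synthetic CT and the reference CT
    (both in HU). `mask` optionally restricts evaluation to the patient
    body; `data_range` defaults to the reference dynamic range."""
    if mask is not None:
        sct, ct = sct[mask], ct[mask]
    diff = sct - ct
    mae = np.abs(diff).mean()          # lower is better
    me = diff.mean()                   # signed bias
    rmse = np.sqrt((diff ** 2).mean())
    if data_range is None:
        data_range = ct.max() - ct.min()
    # PSNR = 10 * log10(data_range^2 / MSE); higher is better
    psnr = 20 * np.log10(data_range) - 10 * np.log10((diff ** 2).mean())
    return {"MAE": mae, "ME": me, "RMSE": rmse, "PSNR": psnr}
```

SSIM is omitted here because it requires windowed local statistics; in practice it is typically computed with an imaging library rather than by hand.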

6.2. Geometric-Based Metrics

Beyond voxel-level similarity, geometric fidelity is evaluated by comparing delineated anatomical structures between sCTs and reference CTs. The Dice Similarity Coefficient (DSC) quantifies volumetric overlap, while surface-based metrics, such as the Hausdorff Distance (HD) and Mean Absolute Surface Distance (MASD), measure boundary agreement. These metrics are especially relevant when sCTs are used for downstream tasks such as segmentation, where structural accuracy of OARs and target volumes is critical.
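A minimal sketch of DSC on binary masks and a brute-force symmetric Hausdorff distance on surface point sets; production code would use optimized libraries rather than the O(n²) distance matrix shown here:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def hausdorff(a_pts, b_pts):
    """Symmetric Hausdorff distance between two point sets, e.g.,
    structure surface voxels given as (n, ndim) coordinate arrays."""
    d = np.linalg.norm(a_pts[:, None, :] - b_pts[None, :, :], axis=-1)
    # worst-case nearest-neighbor distance, taken in both directions
    return max(d.min(axis=1).max(), d.min(axis=0).max())
```

MASD follows the same pattern with the mean instead of the maximum of the directed nearest-neighbor distances.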

6.3. Dosimetry-Based Metrics

Since the goal of sCT generation is accurate radiotherapy planning, dose recalculation provides the most clinically relevant evaluation.
A commonly used metric is the dose difference (DD), which evaluates the difference between sCT-based and CT-based dose distributions. DD is typically computed either voxel-wise or as an average within specific regions of interest (ROIs). It can be reported as an absolute value (in Gy) or relative to a reference (in %), such as the prescribed or maximum dose. Some studies also report the dose pass rate, defined as the percentage of voxels where DD is below a specified threshold (e.g., 2% or 3%) [153].
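As a concrete illustration, the voxel-wise relative DD and the corresponding pass rate can be computed as follows; normalizing by the prescription dose is one common convention among several:

```python
import numpy as np

def dose_pass_rate(d_sct, d_ct, prescription, threshold_pct=2.0):
    """Percentage of voxels whose sCT-vs-CT dose difference, expressed
    relative to the prescription dose, stays below `threshold_pct`."""
    rel_dd = 100.0 * np.abs(d_sct - d_ct) / prescription
    return 100.0 * (rel_dd < threshold_pct).mean()
```
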
Another widely used evaluation method is the dose–volume histogram (DVH), which plots the percentage of volume receiving at least a given dose. From the DVH, clinically relevant endpoints such as Dmax (maximum dose), D95% (dose covering 95% of the target volume), and V20 Gy (volume percentage receiving ≥20 Gy) can be extracted and compared between sCT- and CT-based plans. These specific metrics are widely adopted in radiotherapy dose evaluation guidelines such as ICRU Report 83 [154] and QUANTEC (Quantitative Analyses of Normal Tissue Effects in the Clinic) [155], and have been used extensively in clinical sCT validation studies. Comparing DVH endpoints between sCT- and CT-based plans offers a practical way to assess dosimetric agreement in a clinically interpretable manner.
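Extracting such DVH endpoints from the dose values inside a single structure reduces to sorting and thresholding; this sketch uses a simple rank-based D95% definition, whereas clinical systems may interpolate between dose bins:

```python
import numpy as np

def dvh_endpoints(dose):
    """Extract common DVH endpoints from the dose values (Gy) of the
    voxels inside one structure: Dmax, D95% (minimum dose received by
    the best-covered 95% of the volume), and V20Gy (% of volume
    receiving >= 20 Gy)."""
    dose = np.sort(np.ravel(dose))[::-1]    # descending
    n = dose.size
    d95 = dose[int(np.ceil(0.95 * n)) - 1]  # dose covering 95% of volume
    v20 = 100.0 * (dose >= 20.0).mean()
    return {"Dmax": dose[0], "D95%": d95, "V20Gy": v20}
```

Comparing these endpoints between sCT- and CT-based plans gives the clinically interpretable agreement measures discussed above.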
Gamma analysis is a widely adopted method that combines dose and spatial agreement into a single metric. It evaluates both the dose difference (in %) and the distance-to-agreement (in mm) simultaneously, reporting either the mean gamma index or the gamma pass rate (percentage of voxels with γ < 1). Gamma analysis can be performed in 2D or 3D, but its outcomes are highly dependent on parameters such as dose threshold, grid size, and voxel resolution, which complicates direct comparison across studies [156]. Most DL-based sCT studies report average gamma pass rates (2%/2 mm) ranging from 92.0% to 99.5%. A gamma pass rate above 95% under these thresholds is typically considered acceptable [1,38].
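A brute-force global gamma computation makes the joint dose/distance criterion explicit. This simplified sketch assumes isotropic spacing and omits the dose interpolation and search-radius optimizations used in clinical tools, so it illustrates the definition rather than a validated implementation:

```python
import numpy as np

def gamma_pass_rate(d_ref, d_eval, spacing, dose_pct=2.0, dta_mm=2.0,
                    low_dose_cutoff_pct=10.0):
    """Simplified global gamma analysis. For each reference voxel above
    the low-dose cutoff, search all evaluated voxels for the minimum
    combined dose/distance criterion; a voxel passes if that minimum
    gamma index is < 1. Returns the pass rate in %."""
    dmax = d_ref.max()
    dd = dose_pct / 100.0 * dmax  # global dose criterion (Gy)
    coords = np.indices(d_ref.shape).reshape(d_ref.ndim, -1).T * spacing
    ref, ev = d_ref.ravel(), d_eval.ravel()
    keep = ref > low_dose_cutoff_pct / 100.0 * dmax
    gammas = []
    for p, d in zip(coords[keep], ref[keep]):
        dist2 = ((coords - p) ** 2).sum(axis=1) / dta_mm ** 2
        dose2 = (ev - d) ** 2 / dd ** 2
        gammas.append(np.sqrt((dist2 + dose2).min()))
    return 100.0 * (np.array(gammas) < 1.0).mean()
```

The quadratic search cost is why practical implementations restrict the search to a neighborhood of each voxel and interpolate the evaluated dose grid.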

7. Discussion

Deep learning has been widely applied to sCT generation in radiotherapy, with applications ranging from CBCT- and MRI-based workflows to simulation-free scenarios. A variety of training strategies have been proposed to improve performance, from frequency-guided DL models to federated learning. These developments reflect the broader trend of AI facilitating IGART. However, applying AI algorithms in real-world clinical settings remains challenging. In this section, we discuss the obstacles to clinical translation, data challenges, and issues with current evaluation methods, and we outline future directions such as benchmarking datasets and standardized pipelines.

7.1. Clinical Gap and Generalizability

Although current DL models have achieved good image fidelity and dose accuracy, most model development is performed under narrow conditions: models are usually confined to one anatomic location, treatment protocol, or homogeneous patient population. Differences in imaging scanners and dataset sizes further limit generalizability. While a few studies have applied CycleGANs across anatomical sites [157,158,159,160], these remain largely constrained to site-specific protocols and cohorts.
Boily et al. recently analyzed a much larger dataset of 4000 patients and investigated model generalizability across age, sex, and anatomical regions, finding that generalization is difficult and highly context-dependent [161]. We therefore call for more coherent evaluation protocols designed for radiotherapy applications.
In addition, models such as GANs and Diffusion networks achieve exceptional performance but incur high training costs. Altalib et al. noted that long runtimes and GPU demands make these models hard to deploy in practice [14]. Clinical readiness requires a trade-off between performance and efficiency.

7.2. Data Challenges

High-quality paired datasets are essential for supervised sCT model training, but they are often difficult to obtain in practice: the temporal gap between imaging sessions introduces misalignment between the two images, and collecting paired scans is rarely feasible in typical clinical workflows [90,162]. These limitations have driven growing interest in unpaired or even unsupervised approaches, with notable examples including CycleGANs [43], anatomy-regularized adversarial models [72], and latent-space Diffusion networks [94,95]. These models improve training flexibility and reduce data dependency; however, their ability to preserve fine anatomical details and accurate HU values remains an open question [88].
Furthermore, as discussed in previous papers, data diversity is a long-standing problem that has not been well addressed [2,15]. Certain anatomical regions such as air pockets or dense bone are underrepresented in the training data, and this may lead to prominent errors when predicting sCT images. Federated learning has been suggested as a potential way to solve this issue [58,111,157], as it allows for broader population coverage without requiring data centralization.

7.3. Suitability of Evaluation Metrics

Most studies on sCT generation primarily report intensity-based metrics such as MAE, PSNR, and SSIM. While these are useful for model benchmarking, they provide limited insight into the clinical utility of DL models. For instance, small HU differences near OARs may barely affect the MAE yet substantially alter the dose distribution during treatment planning.
To address such spatially localized HU errors, gamma analysis has been widely adopted in many studies due to its joint evaluation of dose and spatial agreement. However, its clinical interpretability remains controversial owing to its dependence on arbitrary thresholds and limited correlation with clinical outcomes [156]. Hence, it is important to report clinically interpretable metrics such as DVH deviation and HU accuracy in critical structures. However, such metrics are reported inconsistently across studies, and only a few studies relate dosimetric errors to specific anatomical structures or model design choices [15]. To enable robust clinical validation, evaluation frameworks should be standardized to report not only intensity-based metrics but also clinically relevant ones. In the absence of a treatment plan, anatomy-based spatial descriptors such as the Overlap Volume Histogram (OVH) can serve as an alternative [163]. The OVH captures the geometric relationship between targets and OARs without relying on a treatment plan and has shown potential in estimating achievable dose distributions [164].
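The OVH itself is straightforward to compute from binary masks via a signed distance to the target; this sketch assumes the common convention of negative distances inside the target and uses SciPy's Euclidean distance transform:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def overlap_volume_histogram(target, oar, spacing, distances):
    """OVH: for each expansion distance t, the fraction of the OAR
    volume lying within distance t of the target (distances are
    negative inside the target, positive outside)."""
    dist_out = distance_transform_edt(~target, sampling=spacing)
    dist_in = distance_transform_edt(target, sampling=spacing)
    signed = np.where(target, -dist_in, dist_out)
    d_oar = signed[oar]  # signed distances of OAR voxels to the target
    return np.array([(d_oar <= t).mean() for t in distances])
```

Because the OVH depends only on the segmented geometry, it can compare sCT- and CT-derived anatomy before any plan exists.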

7.4. Future Direction

To bridge the gap between experimental performance and clinical practice, future research can focus on the following directions:
  • Open-Source Availability and Community Resources: Reproducibility remains a major challenge in deep learning-based sCT generation. Future research may prioritize the release of open-source codes and models, which would promote transparency and reproducibility. As highlighted in prior studies, the lack of code sharing hinders reproducibility in medical imaging AI and slows clinical translation compared to general computer vision, where open benchmarks and toolkits have driven rapid progress [165,166,167]. While some projects in Table 1 have released resources, efforts remain inconsistent. Community-maintained repositories and standardized pipelines are needed to support broader validation and adoption;
  • Standardized Benchmarks: Large-scale and well-annotated benchmark datasets with consistent evaluation labels need to be established. Such datasets can be collected across multiple centers, imaging vendors, and treatment protocols to ensure fairness and generalizability [165];
  • Multimodal Learning: By incorporating more image modalities, e.g., PET images, low-field MRI, and CBCT images with different kV settings, various clinical scenarios can be covered. In this way, model robustness can be improved and better OAR delineation on sCT can be achieved [77,124];
  • Personalized Medicine: To account for diversity across populations and institutions, approaches such as federated learning can be further explored to enable the generation of high-quality, patient-tailored sCTs [110,111];
  • End-to-End Clinical Pipelines: As sCT synthesis is only one step in the radiotherapy workflow, it is critical to integrate it with downstream sections such as OAR delineation and treatment planning. A unified and end-to-end pipeline may improve reproducibility and facilitate smoother translation into clinical practice;
  • Vendor Integration and Deployment: It is necessary to collaborate with treatment planning system vendors and hardware manufacturers to enable seamless integration of sCT synthesis into clinical workflows. Several vendor systems have already been proposed and evaluated, such as Syngo_BD (Siemens), MRI Planner (Spectronic), and MR-Box (Therapanacea) [168]. Models must be optimized for runtime efficiency, system compatibility, and real-time inference in clinical settings.

Author Contributions

Conceptualization, Y.G., Y.L., Q.C. and K.D.; methodology, Y.G., Y.L. and X.F.; software, Y.G., R.Z. and Q.C.; validation, Y.G. and K.D.; formal analysis, Y.G., H.H. and K.D.; investigation, Y.G. and K.D.; resources, K.D.; data curation, Y.G. and K.D.; writing—original draft preparation, Y.G. and K.D.; writing—review and editing, all; visualization, Y.G.; supervision, K.D.; project administration, K.D.; funding acquisition, W.N. and K.D. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Cancer Institute of the National Institutes of Health, grant numbers R25CA288263 and R37CA229417. This work was supported in part by the CaREER (Cancer Research Education Excellence in Radiotherapy) program at Johns Hopkins University.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

Quan Chen and Xue Feng are co-founders of Carina Medical LLC. The other authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
CT: Computed Tomography
sCT: Synthetic Computed Tomography
MRI: Magnetic Resonance Imaging
CBCT: Cone-Beam Computed Tomography
OAR: Organ at risk
IGART: Image-guided adaptive radiotherapy
DL: Deep learning
CNN: Convolutional neural network
GAN: Generative adversarial network
FBCT: Fan-beam Computed Tomography
cGAN: Conditional generative adversarial network
GLFC: Global-Local Feature and Contrast learning
HU: Hounsfield unit
DVH: Dose-volume histogram
pCT: Planning Computed Tomography
DVF: Deformation vector field
MAE: Mean absolute error
PSNR: Peak signal to noise ratio
SSIM: Structural similarity index

References

  1. Spadea, M.F.; Maspero, M.; Zaffino, P.; Seco, J. Deep Learning Based Synthetic-CT Generation in Radiotherapy and PET: A Review. Med. Phys. 2021, 48, 6537–6566. [Google Scholar] [CrossRef]
  2. Chen, J.; Ye, Z.; Zhang, R.; Li, H.; Fang, B.; Zhang, L.; Wang, W. Medical Image Translation with Deep Learning: Advances, Datasets and Perspectives. Med. Image Anal. 2025, 103, 103605. [Google Scholar] [CrossRef]
  3. Sherwani, M.K.; Gopalakrishnan, S. A Systematic Literature Review: Deep Learning Techniques for Synthetic Medical Image Generation and Their Applications in Radiotherapy. Front. Radiol. 2024, 4, 1385742. [Google Scholar] [CrossRef] [PubMed]
  4. Whitebird, R.R.; Solberg, L.I.; Bergdall, A.R.; López-Solano, N.; Smith-Bindman, R. Barriers to CT Dose Optimization: The Challenge of Organizational Change. Acad. Radiol. 2021, 28, 387–392. [Google Scholar] [CrossRef] [PubMed]
  5. Fu, Y.; Zhang, H.; Morris, E.D.; Glide-Hurst, C.K.; Pai, S.; Traverso, A.; Wee, L.; Hadzic, I.; Lønne, P.-I.; Shen, C.; et al. Artificial Intelligence in Radiation Therapy. IEEE Trans. Radiat. Plasma Med. Sci. 2022, 6, 158–181. [Google Scholar] [CrossRef] [PubMed]
  6. Jaffray, D.A. Image-guided radiotherapy: From current concept to future perspectives. Nat. Rev. Clin. Oncol. 2012, 9, 688–699. [Google Scholar] [CrossRef]
  7. Mori, S.; Mori, Y. Machine Learning–Based Image Processing in Radiotherapy. In Deep Learning for Advanced X-Ray Detection and Imaging Applications; Iniewski, K., Cai, L., Eds.; Springer Nature: Cham, Switzerland, 2024; pp. 191–208. ISBN 978-3-031-75653-5. [Google Scholar]
  8. Lagendijk, J.J.W.; Raaymakers, B.W.; van Vulpen, M. The Magnetic Resonance Imaging–Linac System. Semin. Radiat. Oncol. 2014, 24, 207–209. [Google Scholar] [CrossRef]
  9. Bahloul, M.A.; Jabeen, S.; Benoumhani, S.; Alsaleh, H.A.; Belkhatir, Z.; Al-Wabil, A. Advancements in Synthetic CT Generation from MRI: A Review of Techniques, and Trends in Radiation Therapy Planning. J. Appl. Clin. Med. Phys. 2024, 25, e14499. [Google Scholar] [CrossRef]
  10. Chandarana, H.; Wang, H.; Tijssen, R.H.N.; Das, I.J. Emerging Role of MRI in Radiation Therapy. J. Magn. Reson. Imaging 2018, 48, 1468–1478. [Google Scholar] [CrossRef]
  11. Karlsson, M.; Karlsson, M.G.; Nyholm, T.; Amies, C.; Zackrisson, B. Dedicated Magnetic Resonance Imaging in the Radiotherapy Clinic. Int. J. Radiat. Oncol. Biol. Phys. 2009, 74, 644–651. [Google Scholar] [CrossRef]
  12. Kazemifar, S.; McGuire, S.; Timmerman, R.; Wardak, Z.; Nguyen, D.; Park, Y.; Jiang, S.; Owrangi, A. MRI-Only Brain Radiotherapy: Assessing the Dosimetric Accuracy of Synthetic CT Images Generated Using a Deep Learning Approach. Radiother. Oncol. 2019, 136, 56–63. [Google Scholar] [CrossRef]
  13. Owrangi, A.M.; Greer, P.B.; Glide-Hurst, C.K. MRI-Only Treatment Planning: Benefits and Challenges. Phys. Med. Biol. 2018, 63, 05TR01. [Google Scholar] [CrossRef]
  14. Altalib, A.; McGregor, S.; Li, C.; Perelli, A. Synthetic CT Image Generation from CBCT: A Systematic Review. IEEE Trans. Radiat. Plasma Med. Sci. 2025, 9, 691–707. [Google Scholar] [CrossRef]
  15. Acquah, I.K.; Issahaku, S.; Tagoe, S.N.A. A Systematic Review of Deep Learning Techniques for Generating Synthetic CT Images from MRI Data. Pol. J. Med. Phys. Eng. 2025, 31, 20–38. [Google Scholar] [CrossRef]
  16. Boulanger, M.; Nunes, J.-C.; Chourak, H.; Largent, A.; Tahri, S.; Acosta, O.; De Crevoisier, R.; Lafond, C.; Barateau, A. Deep learning methods to generate synthetic CT from MRI in radiotherapy: A literature review. Phys. Medica 2021, 89, 265–281. [Google Scholar] [CrossRef] [PubMed]
  17. Thummerer, A.; van der Bijl, E.; Galapon Jr, A.; Verhoeff, J.J.C.; Langendijk, J.A.; Both, S.; van den Berg, C.A.T.; Maspero, M. SynthRAD2023 Grand Challenge Dataset: Generating Synthetic CT for Radiotherapy. Med. Phys. 2023, 50, 4664–4674. [Google Scholar] [CrossRef] [PubMed]
  18. Klein, S.; Staring, M.; Murphy, K.; Viergever, M.A.; Pluim, J. Elastix: A Toolbox for Intensity-Based Medical Image Registration. IEEE Trans. Med. Imaging 2010, 29, 196–205. [Google Scholar] [CrossRef]
  19. Thummerer, A.; van der Bijl, E.; Galapon, A.J.; Kamp, F.; Savenije, M.; Muijs, C.; Aluwini, S.; Steenbakkers, R.J.H.M.; Beuel, S.; Intven, M.P.; et al. SynthRAD2025 Grand Challenge Dataset: Generating Synthetic CTs for Radiotherapy from Head to Abdomen. Med. Phys. 2025, 52, e17981. [Google Scholar] [CrossRef]
  20. Nyholm, T.; Svensson, S.; Andersson, S.; Jonsson, J.; Sohlin, M.; Gustafsson, C.; Kjellén, E.; Söderström, K.; Albertsson, P.; Blomqvist, L.; et al. MR and CT Data with Multiobserver Delineations of Organs in the Pelvic Area—Part of the Gold Atlas Project. Med. Phys. 2018, 45, 1295–1300. [Google Scholar] [CrossRef]
  21. Warfield, S.K.; Zou, K.H.; Wells, W.M. Simultaneous Truth and Performance Level Estimation (STAPLE): An Algorithm for the Validation of Image Segmentation. IEEE Trans. Med. Imaging 2004, 23, 903–921. [Google Scholar] [CrossRef]
  22. Yorke, A.A.; McDonald, G.C.; Solis, D.; Guerrero, T. Pelvic reference data (Version 1) [Data set]. Cancer Imaging Arch. 2019. [Google Scholar] [CrossRef]
  23. Hong, J.; Reyngold, M.; Crane, C.; Cuaron, J.; Hajj, C.; Mann, J.; Zinovoy, M.; Yorke, E.; LoCastro, E.; Apte, A.P.; et al. Breath-hold CT and cone-beam CT images with expert manual organ-at-risk segmentations from radiation treatments of locally advanced pancreatic cancer [Data set]. Cancer Imaging Arch. 2021. [Google Scholar] [CrossRef]
  24. Hong, J.; Reyngold, M.; Crane, C.; Cuaron, J.; Hajj, C.; Mann, J.; Zinovoy, M.; Yorke, E.; LoCastro, E.; Apte, A.P.; et al. CT and Cone-Beam CT of Ablative Radiation Therapy for Pancreatic Cancer with Expert Organ-at-Risk Contours. Sci. Data 2022, 9, 637. [Google Scholar] [CrossRef] [PubMed]
  25. Hugo, G.D.; Weiss, E.; Sleeman, W.C.; Balik, S.; Keall, P.J.; Lu, J.; Williamson, J.F. Data from 4D lung imaging of NSCLC patients (Version 2) [Data set]. Cancer Imaging Arch. 2016. [Google Scholar] [CrossRef]
  26. Balik, S.; Weiss, E.; Jan, N.; Roman, N.; Sleeman, W.C.; Fatyga, M.; Christensen, G.E.; Zhang, C.; Murphy, M.J.; Lu, J.; et al. Evaluation of Four-Dimensional Computed Tomography to Four-Dimensional Cone-Beam Computed Tomography Deformable Image Registration for Lung Cancer Adaptive Radiation Therapy. Int. J. Radiat. Oncol. Biol. Phys. 2013, 86, 372–379. [Google Scholar] [CrossRef]
  27. Huijben, E.M.C. Generating Synthetic Computed Tomography for Radiotherapy: SynthRAD2023 Challenge Report. Med. Image Anal. 2024, 97, 103276. [Google Scholar] [CrossRef]
  28. Villegas, F.; Dal Bello, R.; Alvarez-Andres, E.; Dhont, J.; Janssen, T.; Milan, L.; Robert, C.; Salagean, G.-A.-M.; Tejedor, N.; Trnková, P.; et al. Challenges and Opportunities in the Development and Clinical Implementation of Artificial Intelligence Based Synthetic Computed Tomography for Magnetic Resonance Only Radiotherapy. Radiother. Oncol. 2024, 198, 110387. [Google Scholar] [CrossRef]
29. Han, S.; Hémon, C.; Texier, B.; Kortli, Y.; Queffelec, A.; De Crevoisier, R.; Dowling, J.; Greer, P.; Bessières, I.; Barateau, A.; et al. Balancing Data Consistency and Diversity: Preprocessing and Online Data Augmentation for Multi-Center Deep Learning-Based MR-to-CT Synthesis. Pattern Recognit. Lett. 2025, 189, 56–63.
30. Han, X. MR-Based Synthetic CT Generation Using a Deep Convolutional Neural Network Method. Med. Phys. 2017, 44, 1408–1419.
31. Chen, S.; Qin, A.; Zhou, D.; Yan, D. Technical Note: U-Net-Generated Synthetic CT Images for Magnetic Resonance Imaging-Only Prostate Intensity-Modulated Radiation Therapy Treatment Planning. Med. Phys. 2018, 45, 5659–5665.
32. Li, Y.; Zhu, J.; Liu, Z.; Teng, J.; Xie, Q.; Zhang, L.; Liu, X.; Shi, J.; Chen, L. A Preliminary Study of Using a Deep Convolution Neural Network to Generate Synthesized CT Images Based on CBCT for Adaptive Radiotherapy of Nasopharyngeal Carcinoma. Phys. Med. Biol. 2019, 64, 145010.
33. Dinkla, A.M.; Florkow, M.C.; Maspero, M.; Savenije, M.H.F.; Zijlstra, F.; Doornaert, P.A.H.; van Stralen, M.; Philippens, M.E.P.; van den Berg, C.A.T.; Seevinck, P.R. Dosimetric Evaluation of Synthetic CT for Head and Neck Radiotherapy Generated by a Patch-Based Three-Dimensional Convolutional Neural Network. Med. Phys. 2019, 46, 4095–4104.
34. Chen, L.; Liang, X.; Shen, C.; Jiang, S.; Wang, J. Synthetic CT Generation from CBCT Images via Deep Learning. Med. Phys. 2020, 47, 1115–1125.
35. Thummerer, A.; Seller Oria, C.; Zaffino, P.; Meijers, A.; Guterres Marmitt, G.; Wijsman, R.; Seco, J.; Langendijk, J.A.; Knopf, A.; Spadea, M.F.; et al. Clinical Suitability of Deep Learning Based Synthetic CTs for Adaptive Proton Therapy of Lung Cancer. Med. Phys. 2021, 48, 7673–7684.
36. Kondo, S.; Kasai, S.; Hirasawa, K. Synthesizing 3D Computed Tomography from MRI or CBCT Using 2.5D Deep Neural Networks. arXiv 2023, arXiv:2306.13553.
37. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. In Proceedings of the Medical Image Computing and Computer-Assisted Intervention (MICCAI), Munich, Germany, 5–9 October 2015; Springer: Cham, Switzerland, 2015; pp. 234–241.
38. Landry, G.; Kurz, C.; Thummerer, A. Perspectives for Using Artificial Intelligence Techniques in Radiation Therapy. Eur. Phys. J. Plus 2024, 139, 883.
39. Isensee, F.; Jaeger, P.F.; Kohl, S.A.A.; Petersen, J.; Maier-Hein, K.H. nnU-Net: A Self-Configuring Method for Deep Learning-Based Biomedical Image Segmentation. Nat. Methods 2021, 18, 203–211.
40. Longuefosse, A.; Bot, E.L.; De Senneville, B.D.; Giraud, R.; Mansencal, B.; Coupé, P.; Desbarats, P.; Baldacci, F. Adapted nnU-Net: A Robust Baseline for Cross-Modality Synthesis and Medical Image Inpainting. In Simulation and Synthesis in Medical Imaging; Fernandez, V., Wolterink, J.M., Wiesner, D., Remedios, S., Zuo, L., Casamitjana, A., Eds.; Springer: Cham, Switzerland, 2025; pp. 24–33.
41. Liu, X.; Yang, R.; Xiong, T.; Yang, X.; Li, W.; Song, L.; Zhu, J.; Wang, M.; Cai, J.; Geng, L. CBCT-to-CT Synthesis for Cervical Cancer Adaptive Radiotherapy via U-Net-Based Model Hierarchically Trained with Hybrid Dataset. Cancers 2023, 15, 5479.
42. Yoganathan, S.A.; Aouadi, S.; Ahmed, S.; Paloor, S.; Torfeh, T.; Al-Hammadi, N.; Hammoud, R. Generating Synthetic Images from Cone Beam Computed Tomography Using Self-Attention Residual UNet for Head and Neck Radiotherapy. Phys. Imaging Radiat. Oncol. 2023, 28, 100512.
43. Wolterink, J.M.; Dinkla, A.M.; Savenije, M.H.F.; Seevinck, P.R.; van den Berg, C.A.T.; Isgum, I. Deep MR to CT Synthesis Using Unpaired Data. arXiv 2017, arXiv:1708.01155.
44. Liang, X.; Chen, L.; Nguyen, D.; Zhou, Z.; Gu, X.; Yang, M.; Wang, J.; Jiang, S. Generating Synthesized Computed Tomography (CT) from Cone-Beam Computed Tomography (CBCT) Using CycleGAN for Adaptive Radiation Therapy. Phys. Med. Biol. 2019, 64, 125002.
45. Cusumano, D.; Lenkowicz, J.; Votta, C.; Boldrini, L.; Placidi, L.; Catucci, F.; Dinapoli, N.; Antonelli, M.V.; Romano, A.; De Luca, V.; et al. A Deep Learning Approach to Generate Synthetic CT in Low Field MR-Guided Adaptive Radiotherapy for Abdominal and Pelvic Cases. Radiother. Oncol. 2020, 153, 205–212.
46. Liu, Y.; Lei, Y.; Wang, T.; Fu, Y.; Tang, X.; Curran, W.J.; Liu, T.; Patel, P.; Yang, X. CBCT-Based Synthetic CT Generation Using Deep-Attention cycleGAN for Pancreatic Adaptive Radiotherapy. Med. Phys. 2020, 47, 2472–2483.
47. Zhang, Y.; Yue, N.; Su, M.-Y.; Liu, B.; Ding, Y.; Zhou, Y.; Wang, H.; Kuang, Y.; Nie, K. Improving CBCT Quality to CT Level Using Deep Learning with Generative Adversarial Network. Med. Phys. 2021, 48, 2816–2826.
48. Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative Adversarial Networks. Commun. ACM 2020, 63, 139–144.
49. Rusanov, B.; Hassan, G.M.; Reynolds, M.; Sabet, M.; Kendrick, J.; Rowshanfarzad, P.; Ebert, M. Deep Learning Methods for Enhancing Cone-Beam CT Image Quality toward Adaptive Radiation Therapy: A Systematic Review. Med. Phys. 2022, 49, 6019–6054.
50. Zhang, Y.; Ding, S.; Gong, X.; Yuan, X.; Lin, J.; Chen, Q.; Li, J. Generating Synthesized Computed Tomography from CBCT Using a Conditional Generative Adversarial Network for Head and Neck Cancer Patients. Technol. Cancer Res. Treat. 2022, 21, 15330338221085358.
51. Lenkowicz, J.; Votta, C.; Nardini, M.; Quaranta, F.; Catucci, F.; Boldrini, L.; Vagni, M.; Menna, S.; Placidi, L.; Romano, A.; et al. A Deep Learning Approach to Generate Synthetic CT in Low Field MR-Guided Radiotherapy for Lung Cases. Radiother. Oncol. 2022, 176, 31–38.
52. Pang, B.; Si, H.; Liu, M.; Fu, W.; Zeng, Y.; Liu, H.; Cao, T.; Chang, Y.; Quan, H.; Yang, Z. Comparison and Evaluation of Different Deep Learning Models of Synthetic CT Generation from CBCT for Nasopharynx Cancer Adaptive Proton Therapy. Med. Phys. 2023, 50, 6920–6930.
53. Zhu, J.Y.; Park, T.; Isola, P.; Efros, A.A. Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 2242–2251.
54. Gao, L.; Xie, K.; Wu, X.; Lu, Z.; Li, C.; Sun, J.; Lin, T.; Sui, J.; Ni, X. Generating Synthetic CT from Low-Dose Cone-Beam CT by Using Generative Adversarial Networks for Adaptive Radiotherapy. Radiat. Oncol. 2021, 16, 202.
55. Brou Boni, K.N.D.; Klein, J.; Vanquin, L.; Wagner, A.; Lacornerie, T.; Pasquier, D.; Reynaert, N. MR to CT Synthesis with Multicenter Data in the Pelvic Area Using a Conditional Generative Adversarial Network. Phys. Med. Biol. 2020, 65, 075002.
56. Fetty, L.; Löfstedt, T.; Heilemann, G.; Furtado, H.; Nesvacil, N.; Nyholm, T.; Georg, D.; Kuess, P. Investigating Conditional GAN Performance with Different Generator Architectures, an Ensemble Model, and Different MR Scanners for MR-sCT Conversion. Phys. Med. Biol. 2020, 65, 105004.
57. Qiu, R.L.J.; Lei, Y.; Shelton, J.; Higgins, K.; Bradley, J.D.; Curran, W.J.; Liu, T.; Kesarwala, A.H.; Yang, X. Deep Learning-Based Thoracic CBCT Correction with Histogram Matching. Biomed. Phys. Eng. Express 2021, 7, 065040.
58. Bdair, T.; Saadeh, H.; Qaqish, B.; Sulaq, A.; Rawashdeh, M. Medical Image-to-Image Translation with Spatial Self-Attention for Radiotherapy in Federated Learning. In Proceedings of the 2024 Fifth International Conference on Intelligent Data Science Technologies and Applications (IDSTA), Dubrovnik, Croatia, 24–27 September 2024; pp. 103–110.
59. Cao, N.; Wang, Z.; Ding, J.; Zhang, H.; Zhang, S.; Gao, L.; Sun, J.; Xie, K.; Ni, X. A 4D-CBCT Correction Network Based on Contrastive Learning for Dose Calculation in Lung Cancer. Radiat. Oncol. 2024, 19, 20.
60. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, L.; Polosukhin, I. Attention Is All You Need. In Proceedings of the Advances in Neural Information Processing Systems (NeurIPS), Long Beach, CA, USA, 4–9 December 2017; pp. 5998–6008.
61. Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. An Image Is Worth 16 × 16 Words: Transformers for Image Recognition at Scale. arXiv 2021, arXiv:2010.11929.
62. Chen, X.; Liu, Y.; Yang, B.; Zhu, J.; Yuan, S.; Xie, X.; Liu, Y.; Dai, J.; Men, K. A More Effective CT Synthesizer Using Transformers for Cone-Beam CT-Guided Adaptive Radiotherapy. Front. Oncol. 2022, 12, 988800.
63. Yang, B.; Liu, Y.; Zhu, J.; Dai, J.; Men, K. Deep Learning Framework to Improve the Quality of Cone-Beam Computed Tomography for Radiotherapy Scenarios. Med. Phys. 2023, 50, 7641–7653.
64. Saint-Esteven, A.L.G. Synthetic Computed Tomography for Low-Field Magnetic Resonance-Only Radiotherapy in Head-and-Neck Cancer Using Residual Vision Transformers. Phys. Imaging Radiat. Oncol. 2023, 27, 100471.
65. Zhao, R.; Qi, J.; Li, R.; Yang, T.; Li, J.; Zhang, J.; Zhang, Z. HFGS: High-Frequency Information Guided Net for Multi-Regions Pseudo-CT Synthesis. In Proceedings of the 2024 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), Lisboa, Portugal, 3–6 December 2024; pp. 2957–2964.
66. Chen, X.; Qiu, R.L.J.; Peng, J.; Shelton, J.W.; Chang, C.-W.; Yang, X.; Kesarwala, A.H. CBCT-Based Synthetic CT Image Generation Using a Diffusion Model for CBCT-Guided Lung Radiotherapy. Med. Phys. 2024, 51, 8168–8178.
67. Peng, J.; Qiu, R.L.J.; Wynne, J.F.; Chang, C.-W.; Pan, S.; Wang, T.; Roper, J.; Liu, T.; Patel, P.R.; Yu, D.S.; et al. CBCT-Based Synthetic CT Image Generation Using Conditional Denoising Diffusion Probabilistic Model. Med. Phys. 2024, 51, 1847–1859.
68. Li, Y.; Shao, H.-C.; Liang, X.; Chen, L.; Li, R.; Jiang, S.; Wang, J.; Zhang, Y. Zero-Shot Medical Image Translation via Frequency-Guided Diffusion Models. IEEE Trans. Med. Imaging 2024, 43, 980–993.
69. Lyu, Q.; Wang, G. Conversion between CT and MRI Images Using Diffusion and Score-Matching Models. arXiv 2022, arXiv:2212.05400.
70. Fu, L.; Li, X.; Cai, X.; Miao, D.; Yao, Y.; Shen, Y. Energy-Guided Diffusion Model for CBCT-to-CT Synthesis. Comput. Med. Imaging Graph. 2024, 113, 102344.
71. Li, M.; Li, X.; Safai, S.; Lomax, A.J.; Zhang, Y. Diffusion Schrödinger Bridge Models for High-Quality MR-to-CT Synthesis for Proton Treatment Planning. Med. Phys. 2025, 52, e17898.
72. Gong, C.; Jian, J.; Huang, Y.; Luo, M.; Ding, S.; Yuan, X.; Wang, X.; Zhang, Y. Boundary Information-Guided Adversarial Diffusion Model for Efficient Unsupervised Synthetic CT Generation. Med. Phys. 2025, 52, 4675–4693.
73. Yuan, M.; Xie, Y.; Zhao, R.; Lv, N.; Zhang, Z.; Zhu, L.; Wu, X. Generating Synthesized Computed Tomography from CBCT/LDCT Using a Novel Generative-Transformer Adversarial-CNN. Biomed. Signal Process. Control 2024, 96, 106660.
74. Hu, C.; Cao, N.; Li, X.; He, Y.; Zhou, H. CBCT-to-CT Synthesis Using a Hybrid U-Net Diffusion Model Based on Transformers and Information Bottleneck Theory. Sci. Rep. 2025, 15, 10816.
75. Pan, S.; Abouei, E.; Wynne, J.; Chang, C.-W.; Wang, T.; Qiu, R.L.J.; Li, Y.; Peng, J.; Roper, J.; Patel, P.; et al. Synthetic CT Generation from MRI Using 3D Transformer-Based Denoising Diffusion Model. Med. Phys. 2024, 51, 2538–2548.
76. Viar-Hernandez, D.; Manuel Molina-Maza, J.; Pan, S.; Salari, E.; Chang, C.-W.; Eidex, Z.; Zhou, J.; Antonio Vera-Sanchez, J.; Rodriguez-Vila, B.; Malpica, N.; et al. Exploring Dual Energy CT Synthesis in CBCT-Based Adaptive Radiotherapy and Proton Therapy: Application of Denoising Diffusion Probabilistic Models. Phys. Med. Biol. 2024, 69, 215011.
77. Zhou, X.; Wu, J.; Zhao, H.; Chen, L.; Zhang, S.; Wang, G. GLFC: Unified Global-Local Feature and Contrast Learning with Mamba-Enhanced UNet for Synthetic CT Generation from CBCT. arXiv 2025, arXiv:2503.04567.
78. Hu, Y.; Zhou, H.; Cao, N.; Li, C.; Hu, C. Synthetic CT Generation Based on CBCT Using Improved Vision Transformer CycleGAN. Sci. Rep. 2024, 14, 11455.
79. Rusanov, B.; Hassan, G.M.; Reynolds, M.; Sabet, M.; Rowshanfarzad, P.; Bucknell, N.; Gill, S.; Dass, J.; Ebert, M. Transformer CycleGAN with Uncertainty Estimation for CBCT Based Synthetic CT in Adaptive Radiotherapy. Phys. Med. Biol. 2024, 69, 035014.
80. Fu, J.; Yang, Y.; Singhrao, K.; Ruan, D.; Chu, F.-I.; Low, D.A.; Lewis, J.H. Deep Learning Approaches Using 2D and 3D Convolutional Neural Networks for Generating Male Pelvic Synthetic Computed Tomography from Magnetic Resonance Imaging. Med. Phys. 2019, 46, 3788–3798.
81. Liu, Y.; Chen, A.; Shi, H.; Huang, S.; Zheng, W.; Liu, Z.; Zhang, Q.; Yang, X. CT Synthesis from MRI Using Multi-Cycle GAN for Head-and-Neck Radiation Therapy. Comput. Med. Imaging Graph. 2021, 91, 101953.
82. Zimmermann, L.; Knäusl, B.; Stock, M.; Lütgendorf-Caucig, C.; Georg, D.; Kuess, P. An MRI Sequence Independent Convolutional Neural Network for Synthetic Head CT Generation in Proton Therapy. Z. Med. Phys. 2022, 32, 218–227.
83. Tschuchnig, M.; Lamminger, L.; Steininger, P.; Gadermayr, M. Enhancing Synthetic CT from CBCT via Multimodal Fusion and End-to-End Registration. arXiv 2025, arXiv:2504.12345.
84. Neppl, S.; Landry, G.; Kurz, C.; Hansen, D.C.; Hoyle, B.; Stöcklein, S.; Seidensticker, M.; Weller, J.; Belka, C.; Parodi, K.; et al. Evaluation of Proton and Photon Dose Distributions Recalculated on 2D and 3D Unet-Generated pseudoCTs from T1-Weighted MR Head Scans. Acta Oncol. 2019, 58, 1429–1434.
85. Sun, B.; Jia, S.; Jiang, X.; Jia, F. Double U-Net CycleGAN for 3D MR to CT Image Synthesis. Int. J. CARS 2023, 18, 149–156.
86. Spadea, M.F.; Pileggi, G.; Zaffino, P.; Salome, P.; Catana, C.; Izquierdo-Garcia, D.; Amato, F.; Seco, J. Deep Convolution Neural Network (DCNN) Multiplane Approach to Synthetic CT Generation from MR Images—Application in Brain Proton Therapy. Int. J. Radiat. Oncol. Biol. Phys. 2019, 105, 495–503.
87. Maspero, M.; Bentvelzen, L.G.; Savenije, M.H.F.; Guerreiro, F.; Seravalli, E.; Janssens, G.O.; van den Berg, C.A.T.; Philippens, M.E.P. Deep Learning-Based Synthetic CT Generation for Paediatric Brain MR-Only Photon and Proton Radiotherapy. Radiother. Oncol. 2020, 153, 197–204.
88. Dayarathna, S.; Islam, K.T.; Uribe, S.; Yang, G.; Hayat, M.; Chen, Z. Deep Learning Based Synthesis of MRI, CT and PET: Review and Analysis. Med. Image Anal. 2024, 92, 103046.
89. Thummerer, A.; Zaffino, P.; Meijers, A.; Marmitt, G.G.; Seco, J.; Steenbakkers, R.J.H.M.; Langendijk, J.A.; Both, S.; Spadea, M.F.; Knopf, A.C. Comparison of CBCT Based Synthetic CT Methods Suitable for Proton Dose Calculations in Adaptive Proton Therapy. Phys. Med. Biol. 2020, 65, 095002.
90. Brou Boni, K.N.D.; Klein, J.; Gulyban, A.; Reynaert, N.; Pasquier, D. Improving Generalization in MR-to-CT Synthesis in Radiotherapy by Using an Augmented Cycle Generative Adversarial Network with Unpaired Data. Med. Phys. 2021, 48, 3003–3010.
91. Zhou, T.; Luo, J.; Sun, Y.; Tan, Y.; Yao, S.; Haouchine, N.; Raymond, S. Path and Bone-Contour Regularized Unpaired MRI-to-CT Translation. arXiv 2025, arXiv:2502.08765.
92. Ryu, S.; Kim, J.H.; Choi, Y.J.; Lee, J.S. Generating Synthetic CT Images from Unpaired Head and Neck CBCT Images and Validating the Importance of Detailed Nasal Cavity Acquisition through Simulations. Comput. Biol. Med. 2025, 185, 109568.
93. Yang, H.; Sun, J.; Carass, A.; Zhao, C.; Lee, J.; Prince, J.L.; Xu, Z. Unsupervised MR-to-CT Synthesis Using Structure-Constrained CycleGAN. IEEE Trans. Med. Imaging 2020, 39, 4249–4261.
94. Peng, J.; Gao, Y.; Chang, C.-W.; Qiu, R.; Wang, T.; Kesarwala, A.; Yang, K.; Scott, J.; Yu, D.; Yang, X. Unsupervised Bayesian Generation of Synthetic CT from CBCT Using Patient-Specific Score-Based Prior. Med. Phys. 2024, 52, 2238–2246.
95. Zhang, Y.; Li, C.; Dai, Z.; Zhong, L.; Wang, X.; Yang, W. Breath-Hold CBCT-Guided CBCT-to-CT Synthesis via Multimodal Unsupervised Representation Disentanglement Learning. IEEE Trans. Med. Imaging 2023, 42, 2313–2324.
96. Zhu, S.; Wu, Z.; Zhang, Z.; Shu, H.; Xie, S.; Coatrieux, J.-L.; Chen, Y. Planning CT Guided Limited-Angle CBCT to CT Synthesis via Content-Style Decoupled Learning. IEEE Trans. Instrum. Meas. 2025, 74, 4502514.
97. Szmul, A.; Jayaprakash, K.T.; Jena, R.; Hoole, A.; Veiga, C.; Hu, Y.; McClelland, J.R. MAGIC: Multitask Adversarial Generator of Images and Contours from CBCT for Adaptive Radiotherapy. arXiv 2024, arXiv:2405.09113.
98. Zeng, H.; E, X.; Lv, M.; Zeng, S.; Feng, Y.; Shen, W.; Guan, W.; Zhang, Y.; Zhao, R.; Yu, J. Deep Learning-Based Synthetic CT for Dosimetric Monitoring of Combined Conventional Radiotherapy and Lattice Boost in Large Lung Tumors. Radiat. Oncol. 2025, 20, 12.
99. Kaushik, S.S.; Bylund, M.; Cozzini, C.; Shanbhag, D.; Petit, S.F.; Wyatt, J.J.; Menzel, M.I.; Pirkl, C.; Mehta, B.; Chauhan, V.; et al. Region of Interest Focused MRI to Synthetic CT Translation Using Regression and Segmentation Multi-Task Network. Phys. Med. Biol. 2023, 68, 195003.
100. Longuefosse, A.; Denis de Senneville, B.; Dournes, G.; Benlala, I.; Baldacci, F.; Desbarats, P. Anatomical Feature-Prioritized Loss for Enhanced MR to CT Translation. Phys. Med. Biol. 2025, 70, 145012.
101. Choi, Y.; Lee, S. CycleGAN with Multi-Scale Block and Attention Gate for Synthesizing CT Image in Adaptive Radiotherapy. In Proceedings of the Medical Imaging 2025: Physics of Medical Imaging; Sabol, J.M., Abbaszadeh, S., Li, K., Eds.; SPIE: San Diego, CA, USA, 2025; p. 71.
102. Yin, S.; Tan, H.; Chong, L.M.; Liu, H.; Liu, H.; Lee, K.H.; Tuan, J.K.L.; Ho, D.; Jin, Y. HC3L-Diff: Hybrid Conditional Latent Diffusion with High Frequency Enhancement for CBCT-to-CT Synthesis. arXiv 2024, arXiv:2407.01289.
103. Zhang, Y.; Li, L.; Wang, J.; Yang, X.; Zhou, H.; He, J.; Xie, Y.; Jiang, Y.; Sun, W.; Zhang, X.; et al. Texture-Preserving Diffusion Model for CBCT-to-CT Synthesis. Med. Image Anal. 2025, 99, 103362.
104. Sun, H.; Sun, X.; Li, J.; Zhu, J.; Yang, Z.; Meng, F.; Liu, Y.; Gong, J.; Wang, Z.; Yin, Y.; et al. Pseudo-CT Synthesis in Adaptive Radiotherapy Based on a Stacked Coarse-to-Fine Model: Combining Diffusion Process and Spatial-Frequency Convolutions. Med. Phys. 2024, 51, 8979–8998.
105. Luo, F.; Ma, C.; Xu, K. CBCT-to-CT Synthesis with a Hybrid of CycleGAN and Latent Diffusion. Neuroradiology 2025, 67, 1123–1132.
106. Emami, H.; Dong, M.; Nejad-Davarani, S.; Glide-Hurst, C. SA-GAN: Structure-Aware GAN for Organ-Preserving Synthetic CT Generation. arXiv 2021, arXiv:2102.05684.
107. Poch, D.V.; Estievenart, Y.; Zhalieva, E.; Patra, S.; Yaqub, M.; Ben Taieb, S. Segmentation-Guided CT Synthesis with Pixel-Wise Conformal Uncertainty Bounds. arXiv 2025, arXiv:2503.08515. Available online: https://arxiv.org/abs/2503.08515 (accessed on 20 October 2025).
108. Yu, Z.; Zhao, B.; Zhang, S.; Chen, X.; Yan, F.; Feng, J.; Peng, T.; Zhang, X.-Y. HiFi-Syn: Hierarchical Granularity Discrimination for High-Fidelity Synthesis of MR Images with Structure Preservation. Med. Image Anal. 2025, 100, 103390.
109. Phan, V.M.H.; Xie, Y.; Zhang, B.; Qi, Y.; Liao, Z.; Perperidis, A.; Phung, S.L.; Verjans, J.W.; To, M.-S. Structural Attention: Rethinking Transformer for Unpaired Medical Image Synthesis. In Medical Image Computing and Computer Assisted Intervention—MICCAI 2024; Linguraru, M.G., Dou, Q., Feragen, A., Giannarou, S., Glocker, B., Lekadir, K., Schnabel, J.A., Eds.; Lecture Notes in Computer Science; Springer Nature: Cham, Switzerland, 2024; Volume 15007, pp. 690–700. ISBN 978-3-031-72103-8.
110. Raggio, C.B.; Zabaleta, M.K.; Skupien, N.; Blanck, O.; Cicone, F.; Cascini, G.L.; Zaffino, P.; Migliorelli, L.; Spadea, M.F. FedSynthCT-Brain: A Federated Learning Framework for Multi-Institutional Brain MRI-to-CT Synthesis. Comput. Biol. Med. 2025, 192, 110160.
111. Raggio, C.B.; Zaffino, P.; Spadea, M.F. A Privacy-Preserving Federated Learning Framework for Generalizable CBCT-to-Synthetic CT Translation in Head and Neck Radiotherapy. arXiv 2025, arXiv:2503.09870.
112. Eckl, M.; Hoppen, L.; Sarria, G.R.; Boda-Heggemann, J.; Simeonova-Chergou, A.; Steil, V.; Giordano, F.A.; Fleckenstein, J. Evaluation of a Cycle-Generative Adversarial Network-Based Cone-Beam CT to Synthetic CT Conversion Algorithm for Adaptive Radiation Therapy. Phys. Med. 2020, 80, 308–316.
113. de Koster, R.J.C.; Thummerer, A.; Scandurra, D.; Langendijk, J.A.; Both, S. Technical Note: Evaluation of Deep Learning Based Synthetic CTs Clinical Readiness for Dose and NTCP Driven Head and Neck Adaptive Proton Therapy. Med. Phys. 2023, 50, 8023–8033.
114. Maspero, M.; Houweling, A.C.; Savenije, M.H.F.; van Heijst, T.C.F.; Verhoeff, J.J.C.; Kotte, A.N.T.J.; van den Berg, C.A.T. A Single Neural Network for Cone-Beam Computed Tomography-Based Radiotherapy of Head-and-Neck, Lung and Breast Cancer. Phys. Imaging Radiat. Oncol. 2020, 14, 24–31.
115. Chen, L.; Liang, X.; Shen, C.; Nguyen, D.; Jiang, S.; Wang, J. Synthetic CT Generation from CBCT Images via Unsupervised Deep Learning. Phys. Med. Biol. 2021, 66, 115019.
116. Vellini, L.; Zucca, S.; Lenkowicz, J.; Menna, S.; Catucci, F.; Quaranta, F.; Pilloni, E.; D’Aviero, A.; Aquilano, M.; Di Dio, C.; et al. A Deep Learning Approach for the Fast Generation of Synthetic Computed Tomography from Low-Dose Cone Beam Computed Tomography Images on a Linear Accelerator Equipped with Artificial Intelligence. Appl. Sci. 2024, 14, 4844.
117. Harms, J.; Lei, Y.; Wang, T.; Zhang, R.; Zhou, J.; Tang, X.; Curran, W.J.; Liu, T.; Yang, X. Paired Cycle-GAN-Based Image Correction for Quantitative Cone-Beam Computed Tomography. Med. Phys. 2019, 46, 3998–4009.
118. Khamfongkhruea, C.; Prakarnpilas, T.; Thongsawad, S.; Deeharing, A.; Chanpanya, T.; Mundee, T.; Suwanbut, P.; Nimjaroen, K. Supervised Deep Learning-Based Synthetic Computed Tomography from Kilovoltage Cone-Beam Computed Tomography Images for Adaptive Radiation Therapy in Head and Neck Cancer. Radiat. Oncol. J. 2024, 42, 181–191.
119. Vellini, L.; Quaranta, F.; Menna, S.; Pilloni, E.; Catucci, F.; Lenkowicz, J.; Votta, C.; Aquilano, M.; D’Aviero, A.; Iezzi, M.; et al. A Deep Learning Algorithm to Generate Synthetic Computed Tomography Images for Brain Treatments from 0.35 T Magnetic Resonance Imaging. Phys. Imaging Radiat. Oncol. 2025, 33, 100708.
120. Haidari, M.; Ali, E.; Granville, D. Towards Real-Time Conformal Palliative Treatment of Spine Metastases: A Deep Learning Approach for Hounsfield Unit Recovery of Cone-Beam CT Images. Med. Phys. 2025, 52, e17838.
121. Thummerer, A.; Seller Oria, C.; Zaffino, P.; Visser, S.; Meijers, A.; Guterres Marmitt, G.; Wijsman, R.; Seco, J.; Langendijk, J.A.; Knopf, A.C.; et al. Deep Learning-Based 4D-Synthetic CTs from Sparse-View CBCTs for Dose Calculations in Adaptive Proton Therapy. Med. Phys. 2022, 49, 6824–6839.
122. Hoffmans-Holtzer, N.; Magallon-Baro, A.; de Pree, I.; Slagter, C.; Xu, J.; Thill, D.; Acht, M.O.; Hoogeman, M.; Petit, S. Evaluating AI-Generated CBCT-Based Synthetic CT Images for Target Delineation in Palliative Treatments of Pelvic Bone Metastasis at Conventional C-Arm Linacs. Radiother. Oncol. 2024, 192, 110110.
123. Heo, J.; Yoon, Y.; Han, H.J.; Kim, J.; Park, K.Y.; Kim, B.M.; Kim, D.J.; Kim, Y.D.; Nam, H.S.; Lee, S.-K.; et al. Prediction of Cerebral Hemorrhagic Transformation after Thrombectomy Using a Deep Learning of Dual-Energy CT. Eur. Radiol. 2024, 34, 3840–3848.
124. Gao, Y.; Qiu, R.L.J.; Xie, H.; Chang, C.-W.; Wang, T.; Ghavidel, B.; Roper, J.; Zhou, J.; Yang, X. CT-Based Synthetic Contrast-Enhanced Dual-Energy CT Generation Using Conditional Denoising Diffusion Probabilistic Model. Phys. Med. Biol. 2024, 69, 165015.
125. Chen, X.; Qiu, R.L.J.; Wang, T.; Chang, C.-W.; Chen, X.; Shelton, J.W.; Kesarwala, A.H.; Yang, X. Using a Patient-Specific Diffusion Model to Generate CBCT-Based Synthetic CTs for CBCT-Guided Adaptive Radiotherapy. Med. Phys. 2025, 52, 471–480.
126. Amirian, M.; Barco, D.; Herzig, I.; Schilling, F.-P. Artifact Reduction in 3D and 4D Cone-Beam Computed Tomography Images with Deep Learning: A Review. IEEE Access 2024, 12, 10281–10295.
127. Gao, L.; Xie, K.; Sun, J.; Lin, T.; Sui, J.; Yang, G.; Ni, X. Streaking Artifact Reduction for CBCT-Based Synthetic CT Generation in Adaptive Radiotherapy. Med. Phys. 2023, 50, 879–893.
128. Brioso, R.C.; Crespi, L.; Seghetto, A.; Dei, D.; Lambri, N.; Mancosu, P.; Scorsetti, M.; Loiacono, D. ARTInp: CBCT-to-CT Image Inpainting and Image Translation in Radiotherapy. arXiv 2025, arXiv:2505.01294.
129. Taasti, V.T.; Hattu, D.; Peeters, S.; van der Salm, A.; van Loon, J.; de Ruysscher, D.; Nilsson, R.; Andersson, S.; Engwall, E.; Unipan, M.; et al. Clinical Evaluation of Synthetic Computed Tomography Methods in Adaptive Proton Therapy of Lung Cancer Patients. Phys. Imaging Radiat. Oncol. 2023, 27, 100459.
130. Tulip, R.; Andersson, S.; Chuter, R.; Manolopoulos, S. Synthetic Computed Tomography Generation Using Deep-Learning for Female Pelvic Radiotherapy Planning. Phys. Imaging Radiat. Oncol. 2025, 33, 100719.
131. Zhong, L.; Chen, Z.; Shu, H.; Zheng, K.; Li, Y.; Chen, W.; Wu, Y.; Ma, J.; Feng, Q.; Yang, W. Multi-Scale Tokens-Aware Transformer Network for Multi-Region and Multi-Sequence MR-to-CT Synthesis in a Single Model. IEEE Trans. Med. Imaging 2024, 43, 794–806.
132. Bird, D.; Nix, M.G.; McCallum, H.; Teo, M.; Gilbert, A.; Casanova, N.; Cooper, R.; Buckley, D.L.; Sebag-Montefiore, D.; Speight, R.; et al. Multicentre, Deep Learning, Synthetic-CT Generation for Ano-Rectal MR-Only Radiotherapy Treatment Planning. Radiother. Oncol. 2021, 156, 23–28.
133. Texier, B.; Hémon, C.; Lekieffre, P.; Collot, E.; Tahri, S.; Chourak, H.; Dowling, J.; Greer, P.; Bessieres, I.; Acosta, O.; et al. Computed Tomography Synthesis from Magnetic Resonance Imaging Using Cycle Generative Adversarial Networks with Multicenter Learning. Phys. Imaging Radiat. Oncol. 2023, 28, 100511.
134. Cao, C.; Wang, D.; Chung, C.; Tian, D.; Rimner, A.; Huang, J.; Jones, D.R. A Systematic Review and Meta-Analysis of Stereotactic Body Radiation Therapy versus Surgery for Patients with Non-Small Cell Lung Cancer. J. Thorac. Cardiovasc. Surg. 2019, 157, 362–373.e8.
135. Tyldesley, S.; Boyd, C.; Schulze, K.; Walker, H.; Mackillop, W.J. Estimating the Need for Radiotherapy for Lung Cancer: An Evidence-Based, Epidemiologic Approach. Int. J. Radiat. Oncol. Biol. Phys. 2001, 49, 973–985.
136. Voong, K.R.; Hazell, S.Z.; Fu, W.; Hu, C.; Lin, C.T.; Ding, K.; Suresh, K.; Hayman, J.; Hales, R.K.; Alfaifi, S.; et al. Relationship Between Prior Radiotherapy and Checkpoint-Inhibitor Pneumonitis in Patients with Advanced Non-Small-Cell Lung Cancer. Clin. Lung Cancer 2019, 20, e470–e479.
137. Tandberg, D.J.; Tong, B.C.; Ackerson, B.G.; Kelsey, C.R. Surgery versus Stereotactic Body Radiation Therapy for Stage I Non-Small Cell Lung Cancer: A Comprehensive Review. Cancer 2018, 124, 667–678.
138. Herbst, R.S.; Morgensztern, D.; Boshoff, C. The Biology and Management of Non-Small Cell Lung Cancer. Nature 2018, 553, 446–454.
139. Hanna, T.P.; King, W.D.; Thibodeau, S.; Jalink, M.; Paulin, G.A.; Harvey-Jones, E.; O’Sullivan, D.E.; Booth, C.M.; Sullivan, R.; Aggarwal, A. Mortality Due to Cancer Treatment Delay: Systematic Review and Meta-Analysis. BMJ 2020, 371, m4087.
140. Mohammed, N.; Kestin, L.L.; Grills, I.S.; Battu, M.; Fitch, D.L.; Wong, C.-Y.O.; Margolis, J.H.; Chmielewski, G.W.; Welsh, R.J. Rapid Disease Progression with Delay in Treatment of Non-Small-Cell Lung Cancer. Int. J. Radiat. Oncol. Biol. Phys. 2011, 79, 466–472.
141. Chen, C.P.; Weinberg, V.K.; Jahan, T.M.; Jablons, D.M.; Yom, S.S. Implications of Delayed Initiation of Radiotherapy: Accelerated Repopulation after Induction Chemotherapy for Stage III Non-Small Cell Lung Cancer. J. Thorac. Oncol. 2011, 6, 1857–1864.
142. Curran, W.J.; Paulus, R.; Langer, C.J.; Komaki, R.; Lee, J.S.; Hauser, S.; Movsas, B.; Wasserman, T.; Rosenthal, S.A.; Gore, E.; et al. Sequential vs. Concurrent Chemoradiation for Stage III Non-Small Cell Lung Cancer: Randomized Phase III Trial RTOG 9410. J. Natl. Cancer Inst. 2011, 103, 1452–1460.
143. Salomaa, E.-R.; Sällinen, S.; Hiekkanen, H.; Liippo, K. Delays in the Diagnosis and Treatment of Lung Cancer. Chest 2005, 128, 2282–2288.
144. Rong, Y.; Tegtmeier, R.; Clouser, E.L.; Vora, S.A.; Lin, C.-S.; Mackie, T.R.; Timmerman, R.; Lin, M.-H. Advancements in Radiation Therapy Treatment Workflows for Precision Medicine: A Review and Forward Looking. Int. J. Radiat. Oncol. Biol. Phys. 2025, 122, 1022–1034.
145. O’Neil, M.; Laba, J.M.; Nguyen, T.K.; Lock, M.; Goodman, C.D.; Huynh, E.; Snir, J.; Munro, V.; Alce, J.; Schrijver, L.; et al. Diagnostic CT-Enabled Planning (DART): Results of a Randomized Trial in Palliative Radiation Therapy. Int. J. Radiat. Oncol. Biol. Phys. 2024, 120, 69–76.
146. Hooshangnejad, H.; Chen, Q.; Feng, X.; Zhang, R.; Ding, K. deepPERFECT: Novel Deep Learning CT Synthesis Method for Expeditious Pancreatic Cancer Radiotherapy. arXiv 2023, arXiv:2301.11085v2.
147. Hooshangnejad, H.; Chen, Q.; Feng, X.; Zhang, R.; Farjam, R.; Voong, K.R.; Hales, R.K.; Du, Y.; Jia, X.; Ding, K. DAART: A Deep Learning Platform for Deeply Accelerated Adaptive Radiation Therapy for Lung Cancer. Front. Oncol. 2023, 13, 1201679.
148. Zhu, L.; Yu, N.Y.; Ahmed, S.K.; Ashman, J.B.; Toesca, D.S.; Grams, M.P.; Deufel, C.L.; Duan, J.; Chen, Q.; Rong, Y. Simulation-Free Workflow for Lattice Radiation Therapy Using Deep Learning Predicted Synthetic Computed Tomography: A Feasibility Study. J. Appl. Clin. Med. Phys. 2025, 26, e70137.
149. Wongtrakool, P.; Puttanawarut, C.; Changkaew, P.; Piasanthia, S.; Earwong, P.; Stansook, N.; Khachonkham, S. Synthetic CT Generation from CBCT and MRI Using StarGAN in the Pelvic Region. Radiat. Oncol. 2025, 20, 18.
150. Pang, Y.; Liu, Y.; Chen, X.; Yap, P.-T.; Lian, J. SinoSynth: A Physics-Based Domain Randomization Approach for Generalizable CBCT Image Enhancement. In Proceedings of the Medical Image Computing and Computer Assisted Intervention—MICCAI 2024; Linguraru, M.G., Dou, Q., Feragen, A., Giannarou, S., Glocker, B., Lekadir, K., Schnabel, J.A., Eds.; Springer Nature: Cham, Switzerland, 2024; pp. 646–656.
151. Dahiya, N.; Alam, S.R.; Zhang, P.; Zhang, S.-Y.; Li, T.; Yezzi, A.; Nadeem, S. Multitask 3D CBCT-to-CT Translation and Organs-at-Risk Segmentation Using Physics-Based Data Augmentation. Med. Phys. 2021, 48, 5130–5141.
152. Yang, J.; Veeraraghavan, H.; Armato, S.G.; Farahani, K.; Kirby, J.S.; Kalpathy-Kramer, J.; Van Elmpt, W.; Dekker, A.; Han, X.; Feng, X.; et al. Autosegmentation for Thoracic Radiation Treatment Planning: A Grand Challenge at AAPM 2017. Med. Phys. 2018, 45, 4568–4581.
153. Edmund, J.M.; Nyholm, T. A Review of Substitute CT Generation for MRI-Only Radiation Therapy. Radiat. Oncol. 2017, 12, 28.
154. Hodapp, N. The ICRU Report 83: Prescribing, Recording and Reporting Photon-Beam Intensity-Modulated Radiation Therapy (IMRT). Strahlenther. Onkol. 2012, 188, 97–99.
155. Bentzen, S.M.; Constine, L.S.; Deasy, J.O.; Eisbruch, A.; Jackson, A.; Marks, L.B.; Haken, R.K.T.; Yorke, E.D. Quantitative Analyses of Normal Tissue Effects in the Clinic (QUANTEC): An Introduction to the Scientific Issues. Int. J. Radiat. Oncol. Biol. Phys. 2010, 76, S3–S9.
156. Hussein, M.; Clark, C.H.; Nisbet, A. Challenges in Calculation of the Gamma Index in Radiotherapy—Towards Good Practice. Phys. Med. 2017, 36, 1–11.
157. Hsu, S.-H.; Han, Z.; Hu, Y.-H.; Ferguson, D.; van Dams, R.; Mak, R.H.; Leeman, J.E.; Sudhyadhom, A. Feasibility Study of a General Model for Synthetic CT Generation in MRI-Guided Extracranial Radiotherapy. Biomed. Phys. Eng. Express 2025, 11, 035028.
158. Aljaafari, L.; Speight, R.; Buckley, D.L.; Al-Qaisieh, B.; Andersson, S.; Bird, D. Evaluating the Dosimetric and Positioning Accuracy of a Deep Learning Based Synthetic-CT Model for Liver Radiotherapy Treatment Planning. Biomed. Phys. Eng. Express 2025, 11, 035014.
159. Kim, H.; Yoo, S.K.; Kim, J.S.; Kim, Y.T.; Lee, J.W.; Kim, C.; Hong, C.-S.; Lee, H.; Han, M.C.; Kim, D.W.; et al. Clinical Feasibility of Deep Learning-Based Synthetic CT Images from T2-Weighted MR Images for Cervical Cancer Patients Compared to MRCAT. Sci. Rep. 2024, 14, 8504.
160. Lerner, M.; Medin, J.; Jamtheim Gustafsson, C.; Alkner, S.; Siversson, C.; Olsson, L.E. Clinical Validation of a Commercially Available Deep Learning Software for Synthetic CT Generation for Brain. Radiat. Oncol. 2021, 16, 66.
  161. Boily, C.; Mazellier, J.-P.; Meyer, P. Large Medical Image Database Impact on Generalizability of Synthetic CT Scan Generation. Comput. Biol. Med. 2025, 193, 110303. [Google Scholar] [CrossRef]
  162. Thummerer, A.; de Jong, B.A.; Zaffino, P.; Meijers, A.; Marmitt, G.G.; Seco, J.; Steenbakkers, R.J.H.M.; Langendijk, J.A.; Both, S.; Spadea, M.F.; et al. Comparison of the Suitability of CBCT- and MR-Based Synthetic CTs for Daily Adaptive Proton Therapy in Head and Neck Patients. Phys. Med. Biol. 2020, 65, 235036. [Google Scholar] [CrossRef] [PubMed]
  163. Wu, B.; Ricchetti, F.; Sanguineti, G.; Kazhdan, M.; Simari, P.; Jacques, R.; Taylor, R.; McNutt, T. Data-Driven Approach to Generating Achievable Dose–Volume Histogram Objectives in Intensity-Modulated Radiotherapy Planning. Int. J. Radiat. Oncol. Biol. Phys. 2011, 79, 1241–1247. [Google Scholar] [CrossRef] [PubMed]
  164. Hooshangnejad, H.; Youssefian, S.; Guest, J.K.; Ding, K. FEMOSSA: Patient-Specific Finite Element Simulation of the Prostate–Rectum Spacer Placement, a Predictive Model for Prostate Cancer Radiotherapy. Med. Phys. 2021, 48, 3438–3452. [Google Scholar] [CrossRef] [PubMed]
  165. Elyan, E.; Vuttipittayamongkol, P.; Johnston, P.; Martin, K.; McPherson, K.; Moreno-García, C.F.; Jayne, C.; Sarker, M.M.K. Computer Vision and Machine Learning for Medical Image Analysis: Recent Advances, Challenges, and Way Forward. Artif. Intell. Surg. 2022, 2, 24–45. [Google Scholar] [CrossRef]
  166. Colliot, O.; Thibeau-Sutre, E.; Burgos, N. Reproducibility in Machine Learning for Medical Imaging. In Machine Learning for Brain Disorders; Colliot, O., Ed.; Humana: New York, NY, USA, 2023; ISBN 978-1-0716-3194-2. [Google Scholar]
  167. Kitamura, F.C.; Pan, I.; Kline, T.L. Reproducible Artificial Intelligence Research Requires Open Communication of Complete Source Code. Radiol. Artif. Intell. 2020, 2, e200060. [Google Scholar] [CrossRef]
  168. Autret, D.; Guillerminet, C.; Roussel, A.; Cossec-Kerloc’h, E.; Dufreneix, S. Comparison of Four Synthetic CT Generators for Brain and Prostate MR-Only Workflow in Radiotherapy. Radiat. Oncol. 2023, 18, 146. [Google Scholar] [CrossRef]
Figure 1. Overview of UNet-based CT synthesis from CBCT. The blue boxes represent convolutional blocks, and the arrows represent the flow of features from the encoder to the decoder.
Figure 2. Overview of GAN-based CT synthesis from CBCT.
Figure 3. Overview of diffusion-based CT synthesis from CBCT.
Table 2. Formulas of evaluation metrics for synthetic image analysis.

Intensity-based metrics:
- Mean Error: $ME = \frac{1}{n}\sum_{i=1}^{n}(sCT_i - CT_i)$
- Mean Absolute Error: $MAE = \frac{1}{n}\sum_{i=1}^{n}\left|sCT_i - CT_i\right|$
- Mean Square Error: $MSE = \frac{1}{n}\sum_{i=1}^{n}(sCT_i - CT_i)^2$
- Root Mean Square Error: $RMSE = \sqrt{\frac{1}{n}\sum_{i=1}^{n}(sCT_i - CT_i)^2}$
- Peak Signal-to-Noise Ratio: $PSNR = 10\log_{10}\frac{(2^b - 1)^2}{MSE}$
- Structural Similarity Index: $SSIM = \frac{(2\mu_x\mu_y + C_1)(2\sigma_{xy} + C_2)}{(\mu_x^2 + \mu_y^2 + C_1)(\sigma_x^2 + \sigma_y^2 + C_2)}$
- Normalized Cross-Correlation: $NCC = \frac{1}{n}\sum\frac{(I_{sCT} - \mu_{sCT})(I_{CT} - \mu_{CT})}{\sigma_{sCT}\,\sigma_{CT}}$

Geometric-based metrics:
- Dice Similarity Coefficient: $DSC = \frac{2\left|V_{CT} \cap V_{sCT}\right|}{\left|V_{CT}\right| + \left|V_{sCT}\right|}$
- Hausdorff Distance: $HD(sCT, CT) = \max\left\{\sup_{x \in C_{CT}}\inf_{y \in C_{sCT}}\|x - y\|,\ \sup_{x \in C_{sCT}}\inf_{y \in C_{CT}}\|x - y\|\right\}$
- Mean Absolute Surface Distance: $MASD(CT, sCT) = \frac{\sum_{x \in C_{CT}}\min_{y \in C_{sCT}}\|x - y\| + \sum_{y \in C_{sCT}}\min_{x \in C_{CT}}\|y - x\|}{\left|C_{CT}\right| + \left|C_{sCT}\right|}$
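As a minimal illustration, the intensity-based metrics in Table 2 (and the Dice coefficient) can be computed on two spatially aligned volumes with NumPy. This is a sketch, not a reference implementation: the function names are ours, the SSIM here uses global image statistics to match the single-window formula in the table (practical implementations typically use a sliding window), and the constants $C_1$, $C_2$ follow the conventional choice $C_1 = (0.01L)^2$, $C_2 = (0.03L)^2$ with dynamic range $L = 2^b - 1$.

```python
import numpy as np

def intensity_metrics(sct, ct, bit_depth=12):
    """Intensity-based metrics from Table 2 for two aligned volumes."""
    sct = np.asarray(sct, dtype=np.float64).ravel()
    ct = np.asarray(ct, dtype=np.float64).ravel()
    diff = sct - ct
    me = diff.mean()                      # Mean Error
    mae = np.abs(diff).mean()             # Mean Absolute Error
    mse = (diff ** 2).mean()              # Mean Square Error
    rmse = np.sqrt(mse)                   # Root Mean Square Error
    peak = float(2 ** bit_depth - 1)      # dynamic range L = 2^b - 1
    psnr = 10.0 * np.log10(peak ** 2 / mse) if mse > 0 else np.inf
    # Global-statistics SSIM, matching the single-window formula.
    c1, c2 = (0.01 * peak) ** 2, (0.03 * peak) ** 2
    mu_x, mu_y = sct.mean(), ct.mean()
    var_x, var_y = sct.var(), ct.var()
    cov = ((sct - mu_x) * (ct - mu_y)).mean()
    ssim = ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
    ncc = cov / (sct.std() * ct.std())    # Normalized Cross-Correlation
    return {"ME": me, "MAE": mae, "MSE": mse, "RMSE": rmse,
            "PSNR": psnr, "SSIM": ssim, "NCC": ncc}

def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary structure masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * inter / (mask_a.sum() + mask_b.sum())
```

For identical inputs these reduce to ME = MAE = RMSE = 0 and SSIM = NCC = DSC = 1, which is a quick sanity check when wiring such metrics into an evaluation pipeline.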