Systematic Review

Machine Learning for Medical Image Translation: A Systematic Review

Jake McNaughton, Justin Fernandez, Samantha Holdsworth, Benjamin Chong, Vickie Shim and Alan Wang
1 Auckland Bioengineering Institute, University of Auckland, 6/70 Symonds Street, Auckland 1010, New Zealand
2 Department of Engineering Science and Biomedical Engineering, University of Auckland, 3/70 Symonds Street, Auckland 1010, New Zealand
3 Faculty of Medical and Health Sciences, University of Auckland, 85 Park Road, Auckland 1023, New Zealand
4 Centre for Brain Research, University of Auckland, 85 Park Road, Auckland 1023, New Zealand
5 Mātai Medical Research Institute, 400 Childers Road, Tairāwhiti Gisborne 4010, New Zealand
* Author to whom correspondence should be addressed.
Bioengineering 2023, 10(9), 1078; https://doi.org/10.3390/bioengineering10091078
Submission received: 19 June 2023 / Revised: 30 July 2023 / Accepted: 7 September 2023 / Published: 12 September 2023
(This article belongs to the Special Issue Machine-Learning-Driven Medical Image Analysis)

Abstract

Background: CT scans are often the first and only form of brain imaging performed to inform treatment plans for neurological patients, due to their time- and cost-effectiveness. However, MR images give a more detailed picture of tissue structure and characteristics and are more likely to pick up abnormalities and lesions. The purpose of this paper is to review studies which use deep learning methods to generate synthetic medical images of modalities such as MRI and CT. Methods: A literature search was performed in March 2023, and relevant articles were selected and analyzed. The year of publication, dataset size, input modality, synthesized modality, deep learning architecture, motivations, and evaluation methods were analyzed. Results: A total of 103 studies were included in this review, all of which were published since 2017. Of these, 74% investigated MRI to CT synthesis; the remaining studies investigated CT to MRI, cross-MRI, PET to CT, and MRI to PET synthesis. Additionally, 58% of studies were motivated by synthesizing CT scans from MRI to perform MRI-only radiation therapy. Other motivations included synthesizing scans to aid diagnosis and completing datasets by synthesizing missing scans. Conclusions: Considerably more research has been carried out on MRI to CT synthesis than on CT to MRI synthesis, despite the latter yielding specific benefits. A limitation on medical image synthesis is that medical datasets, especially paired datasets of different modalities, are lacking in size and availability; it is therefore recommended that a global consortium be developed to obtain and make available more datasets. Finally, it is recommended that work be carried out to establish all uses of the synthesis of medical scans in clinical practice and to discover which evaluation methods are suitable for assessing the synthesized images for these needs.

1. Introduction

Medical imaging is a routine part of the diagnosis and treatment of a variety of medical conditions. Due to limitations, including the acquisition time of imaging methods and the cost of obtaining medical images, patients may not receive all the imaging modalities that they could benefit from. A possible solution to this is to use deep learning methods to generate synthetic medical images which estimate these modalities from scans the patient did receive.
For example, the diagnosis of brain disorders is often informed by brain scans obtained from the patient. The purpose of such neuroimaging is to rule out or diagnose a variety of conditions caused by lesions in the central nervous system. The most widely used imaging modalities for this purpose are magnetic resonance imaging (MRI) and computed tomography (CT). MRI is much more sensitive to conditions such as stroke, offering better soft-tissue contrast and excellent anatomical detail in comparison to CT; however, MRI scans tend to take longer to acquire and are less available and more expensive [1]. MRI is also not appropriate for patients with metal implants or claustrophobia. Due to these limitations, CT scans tend to be the first and often only scan a patient receives. Furthermore, compared to CT scans, MRIs provide more accurate registration to most commonly used brain atlases. Synthesizing an MRI from a patient's CT scan would therefore extend the advantages of MRI to these patients and improve the treatment of those presenting with brain disorders.
Deep learning can be used to generate images and can therefore be applied to this problem. A limitation in using deep learning for medical imaging tasks is the availability of large datasets, a distinguishing factor in terms of which types of deep learning frameworks are suitable. Two commonly used frameworks for image synthesis are generative adversarial networks (GANs) and convolutional neural networks (CNNs). A GAN is a framework that consists of two models, a generator and a discriminator, which are trained simultaneously [2]. The generator captures the data distribution of the training data and attempts to generate data which fits within this distribution, whilst the discriminator is presented with one piece of data and estimates whether it was generated by the generator. The generator and discriminator thus engage in a two-player game, each trying to become better at its respective task, as sketched below. A CNN is a framework that processes pixel data and is often used to detect objects in images [3]. In a medical context, one of the most widely used CNN architectures is U-Net, which is most commonly applied to segmentation tasks [4].
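To make the two-player game concrete, the following is a minimal sketch of GAN training in PyTorch. Every detail (layer sizes, latent dimension, the random stand-in for a batch of real images) is an illustrative assumption rather than a setup taken from any reviewed study.

```python
# Minimal GAN sketch: a generator and discriminator trained against each other.
# All shapes and data here are illustrative assumptions, not from any study.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps a random latent vector to a flattened 32x32 'image'."""
    def __init__(self, latent_dim=64, img_dim=32 * 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, img_dim), nn.Tanh(),  # pixel values in [-1, 1]
        )
    def forward(self, z):
        return self.net(z)

class Discriminator(nn.Module):
    """Estimates the probability that an image came from the real data."""
    def __init__(self, img_dim=32 * 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1), nn.Sigmoid(),
        )
    def forward(self, x):
        return self.net(x)

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

real_images = torch.rand(16, 32 * 32) * 2 - 1  # stand-in for a real training batch

for step in range(100):
    # Discriminator step: label real images 1, generated images 0.
    fake = G(torch.randn(16, 64)).detach()
    loss_d = bce(D(real_images), torch.ones(16, 1)) + bce(D(fake), torch.zeros(16, 1))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # Generator step: try to make the discriminator label generated images as real.
    loss_g = bce(D(G(torch.randn(16, 64))), torch.ones(16, 1))
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()
```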
A variety of evaluation metrics are used to assess the performance of deep learning models for medical image synthesis, many of them the same as those used in general image synthesis tasks. These metrics assess the difference between two images: the one generated by the model and the ground truth image. Commonly used metrics include the mean error (ME), mean absolute error (MAE), and mean squared error (MSE), which compare pixel intensities.
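As a sketch, these three pixel-intensity metrics can be written in a few lines of NumPy; the random arrays standing in for a true and a synthetic scan are illustrative only.

```python
# Pixel-intensity metrics for a synthetic image against its ground truth.
import numpy as np

def mean_error(synthetic, ground_truth):
    return float(np.mean(synthetic - ground_truth))          # ME: signed bias

def mean_absolute_error(synthetic, ground_truth):
    return float(np.mean(np.abs(synthetic - ground_truth)))  # MAE

def mean_squared_error(synthetic, ground_truth):
    return float(np.mean((synthetic - ground_truth) ** 2))   # MSE

rng = np.random.default_rng(0)
gt = rng.uniform(-1000, 2000, size=(256, 256))      # stand-in "true CT" (HU-like range)
syn = gt + rng.normal(0, 40, size=gt.shape)         # stand-in "synthetic CT"
print(mean_error(syn, gt), mean_absolute_error(syn, gt), mean_squared_error(syn, gt))
```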
The purpose of this study is to review the work that has been carried out on medical image synthesis. In medical settings, there is a shortage of large datasets suitable for supervised learning, so this review will consider studies which use supervised learning, unsupervised learning, or both.

2. Methodology

2.1. Search Strategy

This search was completed following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. The focus was to establish what work had been carried out on developing machine learning models which can translate medical images into different modalities. Therefore, the machine learning frameworks used and dataset details, including the body parts and modalities studied, were variables of interest. Articles were included in this review if they conducted original research using machine learning methods to translate medical images from one modality into a different modality. Keywords were developed in three categories (machine learning, image generation, and medical imaging) to address these criteria. The keywords in each category are shown in Table 1.

2.2. Screening Process

Articles published in journals up until July 2023 (inclusive) were searched using PubMed. Additionally, relevant preprints were identified using arXiv. Search queries were developed by combining keywords from the same category with the OR operator and combining the categories using the AND operator, as sketched below. In the first screening phase, articles were screened based on their title and abstract to remove those that were outside the scope of the review or met the exclusion criteria. In the final screening phase, papers were assessed based on the full text. Included papers then underwent data extraction. Papers were excluded if they were not written in English; could not be accessed; did not comprehensively describe an original study; were theses, reviews, or notes; did not give sufficient details on the dataset used to train the model; focused on reconstructing medical images to improve resolution; or focused on translating images into the same modality but with different acquisition parameters.
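The exact query strings submitted to the databases are not reproduced here, but a sketch of how such queries could be assembled from the Table 1 categories (OR within a category, AND across categories) might look like the following.

```python
# Illustrative query builder: OR within each keyword category, AND across them.
categories = {
    "machine learning": ["machine learning", "GAN", "generative adversarial network",
                         "convolutional neural network", "artificial intelligence",
                         "deep learning"],
    "image generation": ["synth*", "generat*", "pseudo*", "transform*"],
    "medical imaging": ["MRI", "MR", "CT", "PET"],
}

def build_query(categories):
    groups = []
    for keywords in categories.values():
        # Quote multi-word terms so they are matched as phrases.
        terms = [f'"{kw}"' if " " in kw else kw for kw in keywords]
        groups.append("(" + " OR ".join(terms) + ")")
    return " AND ".join(groups)

print(build_query(categories))
```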

2.3. Data Extraction

From the included papers, the title, author details, year of publication, dataset size, part of the body the dataset contained, input modality, output modality, motivations stated for medical image synthesis, machine learning methods, and evaluation methods were extracted. Categories were developed to allow the included papers to be grouped based on the extracted data, and the categories are described in Table 2. Data extraction was performed on two separate occasions and compared to decrease the chance of human error. Studies in the same article were counted separately if they used different datasets for training or synthesized different modalities.

3. Results

Figure 1 shows the PRISMA flowchart for this review. A total of 392 articles were identified from PubMed, and 297 articles were identified from ArXiv. A further 15 articles which had already been identified as relevant were included from various sources. After title and abstract screening, 138 papers remained, and after screening of the full text, 99 articles were included, which documented 103 studies (Table 3).

3.1. Modalities Synthesized

Figure 2 shows the breakdown of the types of synthesis in the included studies. Most studies (76) investigated MRI to CT synthesis, with the majority of these being motivated by MRI-only radiation therapy. Thirteen studies investigated Cross-MRI synthesis, which included T1 to T2 and T2 to FLAIR; often, these studies used a dataset with more than two MRI modalities and performed synthesis between many of the different modalities. All Cross-MRI synthesis studies used datasets of the brain. Eleven of the studies investigated CT to MRI synthesis, three studies investigated MRI to PET synthesis, and one study investigated PET to CT synthesis.

3.2. Year of Publication

Although no restriction was placed on the year of publication in the literature search, all included papers were published since 2017 (Figure 3). Between 2017 and 2021, the number of papers published grew roughly exponentially, before dropping from 31 studies in 2021 to 24 studies in 2022. Nine studies were from 2023; however, the literature search only covered papers published up to July 2023.

3.3. Evaluation

A total of 36 different methods were used to evaluate model performance (Figure 4). MAE (mean absolute error), PSNR (peak signal-to-noise ratio), and SSIM (structural similarity index) were the three most used evaluation metrics. It was common for studies motivated by MRI-only radiation therapy to use dosimetric evaluation, which was present in 27 studies. Dosimetric evaluation compares the radiation dose plan computed from the synthetic CT with the plan the patient actually received based on the true CT.
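For illustration, a possible computation of the three most common metrics is sketched below, assuming scikit-image is available for PSNR and SSIM; the arrays are stand-ins for a true and a synthetic CT, with intensities assumed normalized to [0, 1].

```python
# Sketch of the three most common metrics (MAE, PSNR, SSIM) on stand-in scans.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(0)
true_ct = rng.uniform(0.0, 1.0, size=(128, 128))                      # stand-in ground truth
synthetic_ct = np.clip(true_ct + rng.normal(0, 0.05, true_ct.shape),  # stand-in synthetic scan
                       0.0, 1.0)

mae = float(np.mean(np.abs(synthetic_ct - true_ct)))
psnr = peak_signal_noise_ratio(true_ct, synthetic_ct, data_range=1.0)
ssim = structural_similarity(true_ct, synthetic_ct, data_range=1.0)
print(f"MAE={mae:.4f}  PSNR={psnr:.2f} dB  SSIM={ssim:.4f}")
```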

3.4. Motivations

There were multiple motivations mentioned across the surveyed studies (Figure 5). The most common motivation was to achieve MRI-only radiation therapy, which was a motivation for 60 studies—these studies all synthesized CTs from MRIs. Fourteen studies were motivated by synthesizing unobtained scans to aid diagnosis. Eight studies were motivated by increasing the size of paired datasets by synthesizing missing modalities.

3.5. Deep Learning Used

GANs were the main type of deep learning algorithm used, with 72% of studies incorporating a GAN and 48% of studies incorporating a CNN (Figure 6).

3.6. Dataset Sizes

The number of subjects per dataset had a mean of 91 and a median of 39 (Figure 7). Some of the studies with smaller datasets used the leave-one-out method, in which the model is trained on all the data except one instance and then tested on the instance that was left out; this is repeated, leaving each instance out in turn. The mean number of patients in the dataset for cross-MRI synthesis was 274, much larger than the means for MRI to CT (56) and CT to MRI (134).
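A minimal sketch of the leave-one-out scheme is shown below, using scikit-learn's LeaveOneOut splitter; the commented train_model and evaluate calls are hypothetical placeholders for a real synthesis model and metric.

```python
# Leave-one-out cross-validation: each patient is held out once for testing.
import numpy as np
from sklearn.model_selection import LeaveOneOut

patients = np.arange(10)  # stand-in identifiers for 10 patients' paired scans
fold_sizes = []
for train_idx, test_idx in LeaveOneOut().split(patients):
    train_set, held_out = patients[train_idx], patients[test_idx]
    # model = train_model(train_set)      # hypothetical training call
    # score = evaluate(model, held_out)   # hypothetical evaluation call
    fold_sizes.append(len(train_set))
print(f"{len(fold_sizes)} folds; each trains on {fold_sizes[0]} of {len(patients)} patients")
```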

4. Discussion

This systematic review analyzed the current state of medical image synthesis using deep learning. The year of publication, type of synthesis, machine learning framework, dataset size, motivation, and evaluation methods used were analyzed.
The most common synthesis was MRI to CT synthesis, and almost every study performing this synthesis was motivated by MRI-only radiation therapy. The benefits of MRI-only radiotherapy are that the patient does not have to be exposed to the radiation of the CT scan and that time and money are saved. Other motivations included turning datasets of MRIs into paired MRI/sCT datasets and completing datasets by synthesizing missing CTs. Minimal research has been conducted on MRI synthesis from CT scans. Since CTs are often the first or only scans taken for neurological issues, the time advantage and additional information from CT-synthesized MRI would be clinically beneficial. MRI gives superior tissue contrast for the diagnosis of several brain diseases and disorders, such as stroke and traumatic brain injury. CT-synthesized MRI could improve the speed and quality of treatment for stroke patients and provide a solution to the cross-modality registration problem that arises when comparing patients' CT scans to MRI brain atlases. Depending on the training dataset, the generation of T1-weighted, T2-weighted, or even FLAIR images from CT could be investigated. These different MR modalities provide complementary information which can be utilized for diagnostic purposes and for registration to different brain atlases. Only eleven papers [18,19,20,21,22,23,24,25,26,27,28] studied MRI synthesis from CT, which demonstrates a knowledge gap in this area.
The lack of paired MRI/CT datasets is a significant problem that inhibits the use of supervised learning for cross-modality synthesis. It is therefore suggested that future studies investigate whether within-modality synthesis models could be used to generate paired datasets. Paired MRI/CT datasets are useful for a variety of applications, including training models for cross-modality synthesis and training models to perform other tasks that require paired data.
In part because there is no consensus on which metrics to use for evaluation, there also does not appear to be a consensus on the level of accuracy required for synthetic medical images. The quality of the generated images in some publications is an area of particular concern, as some models output blurry images which mask the details of smaller-scale features. A benchmark image quality for models intended for clinical use is much needed. This task will be hampered, however, by differing motivations, since different applications may require different levels of accuracy and image quality. Research that helps build a consensus or gives guidance on the best evaluation methods is warranted to improve progress towards clinically useful synthesized medical images.
There was a range of research motivations across the different studies; however, most papers did not mention more than one of them. The motivations for MRI synthesis from CT were quite different from the motivations for CT synthesis from MRI. A focus for future research should be establishing how different motivations for medical image synthesis affect how the synthesized images should be assessed and evaluated. This would help establish which methods perform best for medical image generation in different contexts. The motivations of the studies strongly affected the methods of evaluation used: a common evaluation method for CTs generated from MRI for MRI-only radiotherapy was dosimetric evaluation, which is not applicable to other types of synthesis. Research investigating clinical uses for synthetic medical images would therefore be significant.
The studies reviewed did not provide much insight into how different machine learning frameworks compare for medical image translation. The research has instead focused on demonstrating that synthesizing medical images with deep learning is feasible. Studies used GANs and CNNs, but no particular focus was put on determining which of these frameworks is better suited to the problem. Many of the papers used GANs, and a selection of these introduced novel contributions to the GAN model they implemented to improve image synthesis. A much smaller selection of the papers used CNNs, and most of these did not implement novel features to adapt the models to this type of synthesis. It is recommended that research be carried out on how CNNs can be adapted for this type of synthesis.
GANs are renowned for image generation, which is presumably why they have been used so often in this area. They are popular for image generation because they produce high-quality images by matching the training distribution. With a dataset of medical images, the distribution statistics will be affected by the percentage of scans with features such as lesions, which creates the possibility of hallucinating or erasing lesions or other abnormalities. Even supervised models such as Pix2Pix still fit to the distribution of the training data [104]. CNNs, by contrast, fit only to the one-to-one pairings in the paired input data. This means they require considerably more data than GANs for stable training; however, it ensures the model learns the relationship between the input and output modalities (see the loss sketch below). The papers using CNNs mostly used UNet and variations of UNet. Although UNet is normally used for segmentation, this architecture has proved to work well for image synthesis. A few papers did compare GANs against CNNs; however, no consistent consensus on their relative performance was found.
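As a sketch of this distinction, a Pix2Pix-style generator objective combines a distribution-matching adversarial term with a paired, one-to-one L1 term, whereas a plain CNN would be trained on the L1 term alone; the weighting (lam = 100) and tensor shapes below are illustrative assumptions.

```python
# Pix2Pix-style generator loss: adversarial (distribution-matching) term plus
# a paired L1 (one-to-one) term. A plain CNN would use only the L1 term.
import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()
l1 = nn.L1Loss()

def generator_loss(disc_logits_on_fake, fake_image, real_image, lam=100.0):
    adv = bce(disc_logits_on_fake, torch.ones_like(disc_logits_on_fake))  # fool the discriminator
    paired = l1(fake_image, real_image)  # enforce the one-to-one pairing
    return adv + lam * paired

# Stand-ins for a discriminator's output and a paired image batch.
logits = torch.randn(4, 1)
fake, real = torch.rand(4, 1, 64, 64), torch.rand(4, 1, 64, 64)
print(generator_loss(logits, fake, real).item())
```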
More studies are required to determine which deep learning architectures and implementations work best for medical image synthesis. To assist the development of this area, it is recommended that future research test and compare different methods of evaluating synthesized medical images, in order to determine the levels of accuracy required for the synthesized images to be clinically useful in different contexts. Finally, it is recommended that the feasibility of a model generating pairs of synthetic CTs and synthetic MRIs be investigated. This has not previously been done and, if feasible, would have helpful implications for using deep learning for synthesis, segmentation, and a variety of other clinical tasks. The lack of large available medical datasets is an ongoing issue; it is therefore recommended that a global consortium be established to collate currently available datasets and coordinate with researchers and medical professionals to encourage ongoing collaboration.

5. Conclusions

In conclusion, this systematic review has revealed a knowledge gap within the field of medical image synthesis. Specifically, very limited research has been conducted on synthesizing MRIs from CT scans, despite a variety of motivations for doing so. Since MRIs give superior tissue contrast and are preferred for the diagnosis of several brain diseases and disorders, synthesizing such data from CTs (which are more commonly obtained) would be clinically beneficial. All reviewed studies on medical image translation were published since 2017, making this a relatively new area; as such, there is little consensus around methods of assessing and testing the performance of models for this task. We therefore recommend that more research be conducted into MRI synthesis from CT scans. Current advances in deep learning have shown clinical utility for stroke and traumatic brain injury patients, making this approach a promising candidate for solving the cross-modality registration problem. Recommendations were given for the directions of future research in this field, including a related application (not yet discussed in the literature) of using image synthesis techniques to generate paired datasets. It was concluded that more research is required to determine which deep learning methods are most effective and accurate in synthesizing medical images for use in a clinical setting.

Author Contributions

Conceptualization: A.W., Methodology: A.W. and J.M., Formal analysis: J.M., Writing—original draft: J.M., Writing—review and editing: A.W., S.H., B.C., V.S. and J.F. All authors have read and agreed to the published version of the manuscript.

Funding

This work was partially supported by the Health Research Council of New Zealand [grant number 21/144].

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

MAE: Mean absolute error
PSNR: Peak signal-to-noise ratio
SSIM: Structural similarity index measure
ME: Mean error
MSE: Mean squared error
DSC: Dice score
PCC: Pearson correlation coefficient
FID: Fréchet inception distance
RMSE: Root mean squared error
NCC: Normalized cross correlation
MAPE: Mean absolute percentage error
VIF: Visual information fidelity
BD: Bjøntegaard-Delta
HD: Hausdorff distance
HFEN: High-frequency error norm
MI: Mutual information
NRMSE: Normalized root mean squared error
RSMPE: Root mean squared percentage error
SD: Sharpness difference
SLPD: Sum of local phase differences
SWD: Sliced Wasserstein discrepancy
NMSE: Normalized mean squared error

References

  1. Yew, K.S.; Cheng, E. Acute stroke diagnosis. Am. Fam. Physician 2009, 80, 33–40. [Google Scholar]
  2. Goodfellow, I.J.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative adversarial nets. In Proceedings of the 27th International Conference on Neural Information Processing Systems, Montreal, QC, Canada, 8–13 December 2014; MIT Press: Cambridge, MA, USA, 2014; Volume 2, pp. 2672–2680. [Google Scholar]
  3. Yamashita, R.; Nishio, M.; Do, R.K.G.; Togashi, K. Convolutional neural networks: An overview and application in radiology. Insights Into Imaging 2018, 9, 611–629. [Google Scholar] [CrossRef]
  4. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. In Proceedings of the Medical Image Computing and Computer-Assisted Intervention—MICCAI 2015, Munich, Germany, 5–9 October 2015; Springer: Cham, Switzerland, 2015. [Google Scholar]
  5. Yu, B.; Wang, Y.; Wang, L.; Shen, D.; Zhou, L. Medical Image Synthesis via Deep Learning. In Deep Learning in Medical Image Analysis: Challenges and Applications; Lee, G., Fujita, H., Eds.; Springer: Cham, Switzerland, 2020; pp. 23–44. [Google Scholar]
  6. Li, J.; Qu, Z.; Yang, Y.; Zhang, F.; Li, M.; Hu, S. TCGAN: A transformer-enhanced GAN for PET synthetic CT. Biomed. Opt. Express 2022, 13, 6003–6018. [Google Scholar] [CrossRef]
  7. Fujita, S.; Hagiwara, A.; Otsuka, Y.B.; Hori, M.; Takei, N.; Hwang, K.-P.; Irie, R.; Andica, C.; Kamagata, K.; Akashi, T.; et al. Deep Learning Approach for Generating MRA Images From 3D Quantitative Synthetic MRI Without Additional Scans. Investig. Radiol. 2020, 55, 249–256. [Google Scholar] [CrossRef]
  8. Pal, S.; Dutta, S.; Maitra, R. Personalized synthetic MR imaging with deep learning enhancements. Magn. Reson. Med. 2022, 89, 1634–1643. [Google Scholar] [CrossRef]
  9. Schilling, L. Generating Synthetic Brain MR Images Using a Hybrid Combination of Noise-to-Image and Image-to-Image GANs. Master’s Thesis, Linköping University, Linköping, Sweden, 2020; p. 90. [Google Scholar]
  10. Uzunova, H.; Ehrhardt, J.; Handels, H. Memory-efficient GAN-based domain translation of high resolution 3D medical images. Comput. Med. Imaging Graph. 2020, 86, 101801. [Google Scholar] [CrossRef]
  11. Kaplan, S.; Perrone, A.; Alexopoulos, D.; Kenley, J.K.; Barch, D.M.; Buss, C.; Elison, J.T.; Graham, A.M.; Neil, J.J.; O’Connor, T.G.; et al. Synthesizing pseudo-T2w images to recapture missing data in neonatal neuroimaging with applications in rs-fMRI. Neuroimage 2022, 253, 119091. [Google Scholar] [CrossRef]
  12. Nencka, A.S.; Klein, A.; Koch, K.M.; McGarry, S.D.; LaViolette, P.S.; Paulson, E.S.; Mickevicius, N.J.; Muftuler, L.T.; Swearingen, B.; McCrea, M.A. Build-A-FLAIR: Synthetic T2-FLAIR Contrast Generation through Physics Informed Deep Learning. arXiv 2019, arXiv:1901.04871. [Google Scholar]
  13. Zhu, L.; Xue, Z.; Jin, Z.; Liu, X.; He, J.; Liu, Z.; Yu, L. Make-A-Volume: Leveraging Latent Diffusion Models for Cross-Modality 3D Brain MRI Synthesis. arXiv 2023, arXiv:2307.10094. [Google Scholar]
  14. Shin, H.; Kim, H.; Kim, S.; Jun, Y.; Eo, T.; Hwang, D. COSMOS: Cross-modality unsupervised domain adaptation for 3D medical image segmentation based on target-aware domain translation and iterative self-training. arXiv 2022, arXiv:2203.16557. [Google Scholar]
  15. Raju, J.C.; Gayatri, K.S.; Ram, K.; Rangasami, R.; Ramachandran, R.; Sivaprakasam, M. MIST GAN: Modality Imputation Using Style Transfer for MRI. In Machine Learning in Medical Imaging; Springer: Cham, Switzerland, 2021. [Google Scholar]
  16. Chen, Y.; Staring, M.; Wolterink, J.M.; Tao, Q. Local Implicit Neural Representations for Multi-Sequence MRI Translation. arXiv 2023, arXiv:2302.01031. [Google Scholar]
  17. Moya-Sáez, E.; Navarro-González, R.; Cepeda, S.; Pérez-Núñez, A.; de Luis-García, R.; Aja-Fernández, S.; Alberola-López, C. Synthetic MRI improves radiomics-based glioblastoma survival prediction. NMR Biomed. 2022, 35, e4754. [Google Scholar] [CrossRef]
  18. Hong, K.-T.; Cho, Y.; Kang, C.H.; Ahn, K.-S.; Lee, H.; Kim, J.; Hong, S.J.; Kim, B.H.; Shim, E. Lumbar Spine Computed Tomography to Magnetic Resonance Imaging Synthesis Using Generative Adversarial Network: Visual Turing Test. Diagnostics 2022, 12, 530. [Google Scholar] [CrossRef]
  19. Li, Y.; Li, W.; Xiong, J.; Xia, J.; Xie, Y. Comparison of Supervised and Unsupervised Deep Learning Methods for Medical Image Synthesis between Computed Tomography and Magnetic Resonance Images. BioMed Res. Int. 2020, 2020, 1–9. [Google Scholar] [CrossRef]
  20. Kalantar, R.; Messiou, C.; Winfield, J.M.; Renn, A.; Latifoltojar, A.; Downey, K.; Sohaib, A.; Lalondrelle, S.; Koh, D.-M.; Blackledge, M.D. CT-Based Pelvic T1-Weighted MR Image Synthesis Using UNet, UNet++ and Cycle-Consistent Generative Adversarial Network (Cycle-GAN). Front. Oncol. 2021, 11, 665807. [Google Scholar] [CrossRef]
  21. Kieselmann, J.P.; Fuller, C.D.; Gurney-Champion, O.J.; Oelfke, U. Cross-modality deep learning: Contouring of MRI data from annotated CT data only. Med. Phys. 2021, 48, 1673–1684. [Google Scholar] [CrossRef]
  22. Li, W.; Li, Y.; Qin, W.; Liang, X.; Xu, J.; Xiong, J.; Xie, Y. Magnetic resonance image (MRI) synthesis from brain computed tomography (CT) images based on deep learning methods for magnetic resonance (MR)-guided radiotherapy. Quant. Imaging Med. Surg. 2020, 10, 1223–1236. [Google Scholar] [CrossRef]
  23. Dong, X.; Lei, Y.; Tian, S.; Wang, T.; Patel, P.; Curran, W.J.; Jani, A.B.; Liu, T.; Yang, X. Synthetic MRI-aided multi-organ segmentation on male pelvic CT using cycle consistent deep attention network. Radiother. Oncol. 2019, 141, 192–199. [Google Scholar] [CrossRef]
  24. Dai, X.; Lei, Y.; Wang, T.; Zhou, J.; Roper, J.; McDonald, M.; Beitler, J.J.; Curran, W.J.; Liu, T.; Yang, X. Automated delineation of head and neck organs at risk using synthetic MRI-aided mask scoring regional convolutional neural network. Med. Phys. 2021, 48, 5862–5873. [Google Scholar] [CrossRef]
  25. McNaughton, J.; Holdsworth, S.; Chong, B.; Fernandez, J.; Shim, V.; Wang, A. Synthetic MRI Generation from CT Scans for Stroke Patients. BioMedInformatics 2023, 3, 791–816. [Google Scholar] [CrossRef]
  26. Rubin, J.; Abulnaga, S.M. CT-To-MR Conditional Generative Adversarial Networks for Ischemic Stroke Lesion Segmentation. In Proceedings of the 2019 IEEE International Conference on Healthcare Informatics, Xi’an, China, 10–13 June 2019; pp. 1–7. [Google Scholar]
  27. Feng, E.; Qin, P.; Chai, R.; Zeng, J.; Wang, Q.; Meng, Y.; Wang, P. MRI Generated From CT for Acute Ischemic Stroke Combining Radiomics and Generative Adversarial Networks. IEEE J. Biomed. Health Inform. 2022, 26, 6047–6057. [Google Scholar] [CrossRef]
  28. Paavilainen, P.; Akram, S.U.; Kannala, J. Bridging the gap between paired and unpaired medical image translation. In Proceedings of the MICCAI Workshop on Deep Generative Models, Strasbourg, France, 1 October 2021; Springer: Berlin/Heidelberg, Germany, 2021. [Google Scholar]
  29. Ahangari, S.; Olin, A.B.; Federspiel, M.K.; Jakoby, B.; Andersen, T.L.; Hansen, A.E.; Fischer, B.M.; Andersen, F.L. A deep learning-based whole-body solution for PET/MRI attenuation correction. EJNMMI Phys. 2022, 9, 55. [Google Scholar] [CrossRef]
  30. Morbée, L.; Chen, M.; Herregods, N.; Pullens, P.; Jans, L.B. MRI-based synthetic CT of the lumbar spine: Geometric measurements for surgery planning in comparison with CT. Eur. J. Radiol. 2021, 144, 109999. [Google Scholar] [CrossRef]
  31. Morbee, L.; Chen, M.; Van Den Berghe, T.; Schiettecatte, E.; Gosselin, R.; Herregods, N.; Jans, L.B.O. MRI-based synthetic CT of the hip: Can it be an alternative to conventional CT in the evaluation of osseous morphology? Eur. Radiol. 2022, 32, 3112–3120. [Google Scholar] [CrossRef]
  32. Jans, L.; Chen, M.; Elewaut, D.; Van den Bosch, F.; Carron, P.; Jacques, P.; Wittoek, R.; Jaremko, J.; Herregods, N. MRI-Based Synthetic CT in the Detection of Structural Lesions in Patients with Suspected Sacroiliitis: Comparison with MRI. Radiology 2021, 298, 343–349. [Google Scholar] [CrossRef]
  33. Florkow, M.C.; Willemsen, K.; Zijlstra, F.; Foppen, W.; Wal, B.C.H.; van Zyp, J.R.N.V.; Viergever, M.A.; Castelein, R.M.; Weinans, H.; Stralen, M.; et al. MRI-based synthetic CT shows equivalence to conventional CT for the morphological assessment of the hip joint. J. Orthop. Res. 2022, 40, 954–964. [Google Scholar] [CrossRef]
  34. Arbabi, S.; Foppen, W.; Gielis, W.P.; van Stralen, M.; Jansen, M.; Arbabi, V.; de Jong, P.A.; Weinans, H.; Seevinck, P. MRI-based synthetic CT in the detection of knee osteoarthritis: Comparison with CT. J. Orthop. Res. 2023, 1–10. [Google Scholar] [CrossRef]
  35. Zhao, S.; Geng, C.; Guo, C.; Tian, F.; Tang, X. SARU: A self-attention ResUNet to generate synthetic CT images for MR-only BNCT treatment planning. Med. Phys. 2023, 50, 117–127. [Google Scholar] [CrossRef]
  36. Kazemifar, S.; Montero, A.M.B.; Souris, K.; Rivas, S.T.; Timmerman, R.; Park, Y.K.; Jiang, S.; Geets, X.; Sterpin, E.; Owrangi, A. Dosimetric evaluation of synthetic CT generated with GANs for MRI-only proton therapy treatment planning of brain tumors. J. Appl. Clin. Med. Phys. 2020, 21, 76–86. [Google Scholar] [CrossRef]
  37. Zimmermann, L.; Knäusl, B.; Stock, M.; Lütgendorf-Caucig, C.; Georg, D.; Kuess, P. An MRI sequence independent convolutional neural network for synthetic head CT generation in proton therapy. Z. Med. Phys. 2022, 32, 218–227. [Google Scholar] [CrossRef]
  38. Maspero, M.; Bentvelzen, L.G.; Savenije, M.H.; Guerreiro, F.; Seravalli, E.; Janssens, G.O.; Berg, C.A.v.D.; Philippens, M.E. Deep learning-based synthetic CT generation for paediatric brain MR-only photon and proton radiotherapy. Radiother. Oncol. 2020, 153, 197–204. [Google Scholar] [CrossRef] [PubMed]
  39. Chen, S.; Peng, Y.; Qin, A.; Liu, Y.; Zhao, C.; Deng, X.; Deraniyagala, R.; Stevens, C.; Ding, X. MR-based synthetic CT image for intensity-modulated proton treatment planning of nasopharyngeal carcinoma patients. Acta Oncol. 2022, 61, 1417–1424. [Google Scholar] [CrossRef] [PubMed]
  40. Liu, Y.; Lei, Y.; Wang, Y.; Wang, T.; Ren, L.; Lin, L.; McDonald, M.; Curran, W.J.; Liu, T.; Zhou, J.; et al. MRI-based treatment planning for proton radiotherapy: Dosimetric validation of a deep learning-based liver synthetic CT generation method. Phys. Med. Biol. 2019, 64, 145015. [Google Scholar] [CrossRef] [PubMed]
  41. Touati, R.; Le, W.T.; Kadoury, S. A feature invariant generative adversarial network for head and neck MRI/CT image synthesis. Phys. Med. Biol. 2021, 66, 095001. [Google Scholar] [CrossRef]
  42. Bahrami, A.; Karimian, A.; Fatemizadeh, E.; Arabi, H.; Zaidi, H. A new deep convolutional neural network design with efficient learning capability: Application to CT image synthesis from MRI. Med. Phys. 2020, 47, 5158–5171. [Google Scholar] [CrossRef]
  43. Bahrami, A.; Karimian, A.; Arabi, H. Comparison of different deep learning architectures for synthetic CT generation from MR images. Phys. Med. 2021, 90, 99–107. [Google Scholar] [CrossRef] [PubMed]
  44. Liu, Y.; Chen, A.; Shi, H.; Huang, S.; Zheng, W.; Liu, Z.; Zhang, Q.; Yang, X. CT synthesis from MRI using multi-cycle GAN for head-and-neck radiation therapy. Comput. Med. Imaging Graph. 2021, 91, 101953. [Google Scholar]
  45. Yoo, G.S.; Luu, H.M.; Kim, H.; Park, W.; Pyo, H.; Han, Y.; Park, J.Y.; Park, S.-H. Feasibility of Synthetic Computed Tomography Images Generated from Magnetic Resonance Imaging Scans Using Various Deep Learning Methods in the Planning of Radiation Therapy for Prostate Cancer. Cancers 2021, 14, 40. [Google Scholar] [CrossRef]
  46. Ranjan, A.; Lalwani, D.; Misra, R. GAN for synthesizing CT from T2-weighted MRI data towards MR-guided radiation treatment. Magn. Reson. Mater. Phys. Biol. Med. 2021, 35, 449–457. [Google Scholar] [CrossRef]
  47. Liu, Y.; Lei, Y.; Wang, T.; Kayode, O.; Tian, S.; Liu, T.; Patel, P.; Curran, W.J.; Ren, L.; Yang, X. MRI-based treatment planning for liver stereotactic body radiotherapy: Validation of a deep learning-based synthetic CT generation method. Br. J. Radiol. 2019, 92, 20190067. [Google Scholar] [CrossRef]
  48. Kazemifar, S.; McGuire, S.; Timmerman, R.; Wardak, Z.; Nguyen, D.; Park, Y.; Jiang, S.; Owrangi, A. MRI-only brain radiotherapy: Assessing the dosimetric accuracy of synthetic CT images generated using a deep learning approach. Radiother. Oncol. 2019, 136, 56–63. [Google Scholar] [CrossRef] [PubMed]
  49. Olin, A.B.; Thomas, C.; Hansen, A.E.; Rasmussen, J.H.; Krokos, G.; Urbano, T.G.; Michaelidou, A.; Jakoby, B.; Ladefoged, C.N.; Berthelsen, A.K.; et al. Robustness and Generalizability of Deep Learning Synthetic Computed Tomography for Positron Emission Tomography/Magnetic Resonance Imaging–Based Radiation Therapy Planning of Patients With Head and Neck Cancer. Adv. Radiat. Oncol. 2021, 6, 100762. [Google Scholar] [CrossRef] [PubMed]
  50. Hernandez, A.G.; Fau, P.; Wojak, J.; Mailleux, H.; Benkreira, M.; Rapacchi, S.; Adel, M. Synthetic computed tomography generation for abdominal adaptive radiotherapy using low-field magnetic resonance imaging. Phys. Imaging Radiat. Oncol. 2023, 25, 100425. [Google Scholar] [CrossRef] [PubMed]
  51. Dinkla, A.; Florkow, M.; Maspero, M.; Savenije, M.; Zijlstra, F.; Doornaert, P.; van Stralen, M.; Philippens, M.; van den Berg, C.; Seevinck, P. Dosimetric Evaluation of Synthetic CT for Head and Neck Radiotherapy Generated by a Patch-Based Three-Dimensional Convolutional Neural Network. Med. Phys. 2019, 46, 4095–4104. [Google Scholar] [PubMed]
  52. Tang, B.; Wu, F.; Fu, Y.; Wang, X.; Wang, P.; Orlandini, L.C.; Li, J.; Hou, Q. Dosimetric evaluation of synthetic CT image generated using a neural network for MR-only brain radiotherapy. J. Appl. Clin. Med. Phys. 2021, 22, 55–62. [Google Scholar] [CrossRef]
  53. Cusumano, D.; Lenkowicz, J.; Votta, C.; Boldrini, L.; Placidi, L.; Catucci, F.; Dinapoli, N.; Antonelli, M.V.; Romano, A.; De Luca, V.; et al. A deep learning approach to generate synthetic CT in low field MR-guided adaptive radiotherapy for abdominal and pelvic cases. Radiother. Oncol. 2020, 153, 205–212. [Google Scholar] [CrossRef]
  54. Gupta, D.; Kim, M.; Vineberg, K.A.; Balter, J.M. Generation of Synthetic CT Images From MRI for Treatment Planning and Patient Positioning Using a 3-Channel U-Net Trained on Sagittal Images. Front. Oncol. 2019, 9, 964. [Google Scholar] [CrossRef]
  55. Parrella, G.; Vai, A.; Nakas, A.; Garau, N.; Meschini, G.; Camagni, F.; Baroni, G. Synthetic CT in Carbon Ion Radiotherapy of the Abdominal Site. Bioengineering 2023, 10, 250. [Google Scholar] [CrossRef]
  56. Chourak, H.; Barateau, A.; Tahri, S.; Cadin, C.; Lafond, C.; Nunes, J.-C.; Boue-Rafle, A.; Perazzi, M.; Greer, P.B.; Dowling, J.; et al. Quality assurance for MRI-only radiation therapy: A voxel-wise population-based methodology for image and dose assessment of synthetic CT generation methods. Front. Oncol. 2022, 12, 968689. [Google Scholar] [CrossRef]
  57. Fu, J.; Singhrao, K.; Cao, M.; Yu, V.Y.; Santhanam, A.P.; Yang, Y.; Guo, M.; Raldow, A.C.; Ruan, D.; Lewis, J.H. Generation of abdominal synthetic CTs from 0.35T MR images using generative adversarial networks for MR-only liver radiotherapy. Biomed. Phys. Eng. Express 2020, 6, 015033. [Google Scholar] [CrossRef]
  58. Lenkowicz, J.; Votta, C.; Nardini, M.; Quaranta, F.; Catucci, F.; Boldrini, L.; Vagni, M.; Menna, S.; Placidi, L.; Romano, A.; et al. A deep learning approach to generate synthetic CT in low field MR-guided radiotherapy for lung cases. Radiother. Oncol. 2022, 176, 31–38. [Google Scholar] [CrossRef]
  59. Wang, J.; Yan, B.; Wu, X.; Jiang, X.; Zuo, Y.; Yang, Y. Development of an unsupervised cycle contrastive unpaired translation network for MRI-to-CT synthesis. J. Appl. Clin. Med. Phys. 2022, 23, e13775. [Google Scholar] [CrossRef]
  60. Yuan, J.; Fredman, E.; Jin, J.-Y.; Choi, S.; Mansur, D.; Sloan, A.; Machtay, M.; Zheng, Y. Monte Carlo Dose Calculation Using MRI Based Synthetic CT Generated by Fully Convolutional Neural Network for Gamma Knife Radiosurgery. Technol. Cancer Res. Treat. 2021, 20, 15330338211046433. [Google Scholar] [CrossRef] [PubMed]
  61. Boni, K.N.D.B.; Klein, J.; Gulyban, A.; Reynaert, N.; Pasquier, D. Improving generalization in MR-to-CT synthesis in radiotherapy by using an augmented cycle generative adversarial network with unpaired data. Med. Phys. 2021, 48, 3003–3010. [Google Scholar] [CrossRef] [PubMed]
  62. Boni, K.N.D.B.; Klein, J.; Vanquin, L.; Wagner, A.; Lacornerie, T.; Pasquier, D.; Reynaert, N. MR to CT synthesis with multicenter data in the pelvic area using a conditional generative adversarial network. Phys. Med. Biol. 2020, 65, 075002. [Google Scholar] [CrossRef] [PubMed]
  63. Liu, L.; Johansson, A.; Cao, Y.; Dow, J.; Lawrence, T.S.; Balter, J.M. Abdominal synthetic CT generation from MR Dixon images using a U-net trained with ‘semi-synthetic’ CT data. Phys. Med. Biol. 2020, 65, 125001. [Google Scholar] [CrossRef]
  64. Song, L.; Li, Y.; Dong, G.; Lambo, R.; Qin, W.; Wang, Y.; Zhang, G.; Liu, J.; Xie, Y. Artificial intelligence-based bone-enhanced magnetic resonance image—A computed tomography/magnetic resonance image composite image modality in nasopharyngeal carcinoma radiotherapy. Quant. Imaging Med. Surg. 2021, 11, 4709–4720. [Google Scholar] [CrossRef] [PubMed]
  65. O’Connor, L.M.; Choi, J.H.; Dowling, J.A.; Warren-Forward, H.; Martin, J.; Greer, P.B. Comparison of Synthetic Computed Tomography Generation Methods, Incorporating Male and Female Anatomical Differences, for Magnetic Resonance Imaging-Only Definitive Pelvic Radiotherapy. Front. Oncol. 2022, 12, 822687. [Google Scholar] [CrossRef]
  66. Lerner, M.; Medin, J.; Gustafsson, C.J.; Alkner, S.; Siversson, C.; Olsson, L.E. Clinical validation of a commercially available deep learning software for synthetic CT generation for brain. Radiat. Oncol. 2021, 16, 66. [Google Scholar] [CrossRef]
  67. Lerner, M.; Medin, J.; Gustafsson, C.J.; Alkner, S.; Olsson, L.E. Prospective Clinical Feasibility Study for MRI-Only Brain Radiotherapy. Front. Oncol. 2021, 11, 812643. [Google Scholar] [CrossRef]
  68. Maspero, M.; Savenije, M.H.F.; Dinkla, A.M.; Seevinck, P.R.; Intven, M.P.W.; Juergenliemk-Schulz, I.M.; Kerkmeijer, L.G.W.; Berg, C.A.T.v.D. Dose evaluation of fast synthetic-CT generation using a generative adversarial network for general pelvis MR-only radiotherapy. Phys. Med. Biol. 2018, 63, 185001. [Google Scholar] [CrossRef] [PubMed]
  69. Qi, M.; Li, Y.; Wu, A.; Jia, Q.; Li, B.; Sun, W.; Dai, Z.; Lu, X.; Zhou, L.; Deng, X.; et al. Multi-sequence MR image-based synthetic CT generation using a generative adversarial network for head and neck MRI-only radiotherapy. Med. Phys. 2020, 47, 1880–1894. [Google Scholar] [CrossRef] [PubMed]
  70. Florkow, M.C.; Zijlstra, F.; Willemsen, K.; Maspero, M.; van den Berg, C.A.T.; Kerkmeijer, L.G.W.; Castelein, R.M.; Weinans, H.; Viergever, M.A.; van Stralen, M.; et al. Deep learning-based MR-to-CT synthesis: The influence of varying gradient echo-based MR images as input channels. Magn. Reson. Med. 2020, 83, 1429–1441. [Google Scholar] [CrossRef] [PubMed]
  71. Farjam, R.; Nagar, H.; Kathy Zhou, X.; Ouellette, D.; Chiara Formenti, S.; DeWyngaert, J.K. Deep learning-based synthetic CT generation for MR-only radiotherapy of prostate cancer patients with 0.35T MRI linear accelerator. J. Appl. Clin. Med. Phys. 2021, 22, 93–104. [Google Scholar] [CrossRef]
  72. Olberg, S.; Zhang, H.; Kennedy, W.R.; Chun, J.; Rodriguez, V.; Zoberi, I.; Thomas, M.A.; Kim, J.S.; Mutic, S.; Green, O.L.; et al. Synthetic CT reconstruction using a deep spatial pyramid convolutional framework for MR-only breast radiotherapy. Med. Phys. 2019, 46, 4135–4147. [Google Scholar] [CrossRef]
  73. Hsu, S.-H.; Han, Z.; Leeman, J.E.; Hu, Y.-H.; Mak, R.H.; Sudhyadhom, A. Synthetic CT generation for MRI-guided adaptive radiotherapy in prostate cancer. Front. Oncol. 2022, 12, 969463. [Google Scholar] [CrossRef]
  74. Park, S.H.; Choi, D.M.; Jung, I.-H.; Chang, K.W.; Kim, M.J.; Jung, H.H.; Chang, J.W.; Kim, H.; Chang, W.S. Clinical application of deep learning-based synthetic CT from real MRI to improve dose planning accuracy in Gamma Knife radiosurgery: A proof of concept study. Biomed. Eng. Lett. 2022, 12, 359–367. [Google Scholar] [CrossRef]
  75. Kang, S.K.; An, H.J.; Jin, H.; Kim, J.-I.; Chie, E.K.; Park, J.M.; Lee, J.S. Synthetic CT generation from weakly paired MR images using cycle-consistent GAN for MR-guided radiotherapy. Biomed. Eng. Lett. 2021, 11, 263–271. [Google Scholar] [CrossRef]
  76. Bourbonne, V.; Jaouen, V.; Hognon, C.; Boussion, N.; Lucia, F.; Pradier, O.; Bert, J.; Visvikis, D.; Schick, U. Dosimetric Validation of a GAN-Based Pseudo-CT Generation for MRI-Only Stereotactic Brain Radiotherapy. Cancers 2021, 13, 1082. [Google Scholar] [CrossRef]
  77. Han, X. MR-based synthetic CT generation using a deep convolutional neural network method. Med. Phys. 2017, 44, 1408–1419. [Google Scholar] [CrossRef]
  78. Liu, X.; Emami, H.; Nejad-Davarani, S.P.; Morris, E.; Schultz, L.; Dong, M.; Glide-Hurst, C.K. Performance of deep learning synthetic CTs for MR-only brain radiation therapy. J. Appl. Clin. Med. Phys. 2021, 22, 308–317. [Google Scholar] [CrossRef]
  79. Lei, Y.; Harms, J.; Wang, T.; Liu, Y.; Shu, H.; Jani, A.B.; Curran, W.J.; Mao, H.; Liu, T.; Yang, X. MRI-only based synthetic CT generation using dense cycle consistent generative adversarial networks. Med. Phys. 2019, 46, 3565–3581. [Google Scholar] [CrossRef]
  80. Liu, Y.; Lei, Y.; Wang, Y.; Shafai-Erfani, G.; Wang, T.; Tian, S.; Yang, X. Evaluation of a Deep Learning-Based Pelvic Synthetic CT Generation Technique for MRI-Based Prostate Proton Treatment Planning. Phys. Med. Biol. 2019, 64, 205022. [Google Scholar] [CrossRef] [PubMed]
  81. Peng, Y.; Chen, S.; Qin, A.; Chen, M.; Gao, X.; Liu, Y.; Miao, J.; Gu, H.; Zhao, C.; Deng, X.; et al. Magnetic resonance-based synthetic computed tomography images generated using generative adversarial networks for nasopharyngeal carcinoma radiotherapy treatment planning. Radiother. Oncol. 2020, 150, 217–224. [Google Scholar] [CrossRef] [PubMed]
  82. Wang, Y.; Liu, C.; Zhang, X.; Deng, W. Synthetic CT Generation Based on T2 Weighted MRI of Nasopharyngeal Carcinoma (NPC) Using a Deep Convolutional Neural Network (DCNN). Front. Oncol. 2019, 9, 1333. [Google Scholar] [CrossRef] [PubMed]
  83. Zhao, Y.; Wang, H.; Yu, C.; Court, L.E.; Wang, X.; Wang, Q.; Pan, T.; Ding, Y.; Phan, J.; Yang, J. Compensation cycle consistent generative adversarial networks (Comp-GAN) for synthetic CT generation from MR scans with truncated anatomy. Med. Phys. 2023, 50, 4399–4414. [Google Scholar] [CrossRef] [PubMed]
  84. McKenzie, E.M.; Santhanam, A.; Ruan, D.; O’Connor, D.; Cao, M.; Sheng, K. Multimodality image registration in the head-and-neck using a deep learning-derived synthetic CT as a bridge. Med. Phys. 2020, 47, 1094–1104. [Google Scholar] [CrossRef]
  85. Willemsen, K.; Ketel, M.H.M.; Zijlstra, F.; Florkow, M.C.; Kuiper, R.J.A.; van der Wal, B.C.H.; Weinans, H.; Pouran, B.; Beekman, F.J.; Seevinck, P.R.; et al. 3D-printed saw guides for lower arm osteotomy, a comparison between a synthetic CT and CT-based workflow. 3D Print. Med. 2021, 7, 13. [Google Scholar] [CrossRef]
  86. Bambach, S.; Ho, M.-L. Deep Learning for Synthetic CT from Bone MRI in the Head and Neck. Am. J. Neuroradiol. 2022, 43, 1172–1179. [Google Scholar] [CrossRef]
  87. Yang, H.; Qian, P.; Fan, C. An Indirect Multimodal Image Registration and Completion Method Guided by Image Synthesis. Comput. Math. Methods Med. 2020, 2020, 2684851. [Google Scholar] [CrossRef]
  88. Masoudi, S.; Anwar, S.M.; Harmon, S.A.; Choyke, P.L.; Turkbey, B.; Bagci, U. Adipose Tissue Segmentation in Unlabeled Abdomen MRI using Cross Modality Domain Adaptation. In Proceedings of the 2020 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), Montreal, QC, Canada, 20–24 July 2020; pp. 1624–1628. [Google Scholar]
  89. Roy, S.; Butman, J.A.; Pham, D.L. Synthesizing CT from Ultrashort Echo-Time MR Images via Convolutional Neural Networks; Springer: Cham, Switzerland, 2017. [Google Scholar]
  90. Emami, H.; Dong, M.; Glide-Hurst, C.K. Attention-Guided Generative Adversarial Network to Address Atypical Anatomy in Synthetic CT Generation. In Proceedings of the 2020 IEEE 21st International Conference on Information Reuse and Integration for Data Science (IRI), Las Vegas, NV, USA, 11–13 August 2020; pp. 188–193. [Google Scholar]
  91. Lyu, Q.; Wang, G. Conversion between ct and mri images using diffusion and score-matching models. arXiv 2022, arXiv:2209.12104. [Google Scholar]
  92. Kläser, K.; Markiewicz, P.; Ranzini, M.; Li, W.; Modat, M.; Hutton, B.F.; Atkinson, D.; Thielemans, K.; Cardoso, M.J.; Ourselin, S. Deep Boosted Regression for MR to CT Synthesis; Springer: Cham, Switzerland, 2018. [Google Scholar]
  93. Wolterink, J.M.; Dinkla, A.M.; Savenije, M.H.; Seevinck, P.R.; van den Berg, C.A.; Isgum, I. Deep MR to CT synthesis using unpaired data. In Proceedings of the International Workshop on Simulation and Synthesis in Medical Imaging, Quebec City, QC, Canada, 10 September 2017; Springer: Cham, Switzerland, 2017. [Google Scholar]
  94. Yang, H.; Sun, J.; Carass, A.; Zhao, C.; Lee, J.; Xu, Z.; Prince, J. Unpaired brain MR-to-CT synthesis using a structure-constrained CycleGAN. In Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support; DLMIA ML-CDS 2018; Lecture Notes in Computer Science Book Series; Springer: Cham, Switzerland, 2018; Volume 11045. [Google Scholar] [CrossRef]
  95. Shi, Z.; Mettes, P.; Zheng, G.; Snoek, C. Frequency-Supervised MR-to-CT Image Synthesis; Springer: Cham, Switzerland, 2021. [Google Scholar]
  96. Olberg, S.; Chun, J.; Choi, B.S.; Park, I.; Kim, H.; Kim, T.; Kim, J.S.; Green, O.; Park, J.C. Abdominal synthetic CT reconstruction with intensity projection prior for MRI-only adaptive radiotherapy. Phys. Med. Biol. 2021, 66, 204001. [Google Scholar] [CrossRef] [PubMed]
  97. Nijskens, L.; Berg, C.A.v.D.; Verhoeff, J.J.; Maspero, M. Exploring contrast generalisation in deep learning-based brain MRI-to-CT synthesis. Phys. Med. 2023, 112, 102642. [Google Scholar] [CrossRef] [PubMed]
  98. Kläser, K.; Varsavsky, T.; Markiewicz, P.; Vercauteren, T.; Atkinson, D.; Thielemans, K.; Hutton, B.; Cardoso, M.J.; Ourselin, S. Improved MR to CT Synthesis for PET/MR Attenuation Correction Using Imitation Learning; Springer: Cham, Switzerland, 2019. [Google Scholar]
  99. Gholamiankhah, F.; Mostafapour, S.; Arabi, H. Deep learning-based synthetic CT generation from MR images: Comparison of generative adversarial and residual neural networks. Int. J. Radiat. Res. 2022, 20, 121–130. [Google Scholar] [CrossRef]
  100. Rajagopal, A.; Natsuaki, Y.; Wangerin, K.; Hamdi, M.; An, H.; Sunderland, J.J.; Laforest, R.; Kinahan, P.E.; Larson, P.E.; Hope, T.A. Synthetic PET via Domain Translation of 3-D MRI. IEEE Trans. Radiat. Plasma Med. Sci. 2022, 7, 333–343. [Google Scholar] [CrossRef]
  101. Hussein, R.; Zhao, M.Y.; Shin, D.; Guo, J.; Chen, K.T.; Armindo, R.D.; Davidzon, G.; Moseley, M.; Zaharchuk, G. Multi-task Deep Learning for Cerebrovascular Disease Classification and MRI-to-PET Translation. In Proceedings of the 2022 26th International Conference on Pattern Recognition (ICPR), Montreal, QC, Canada, 21–25 August 2022; pp. 4306–4312. [Google Scholar]
  102. Sikka, A.; Virk, J.S.; Bathula, D.R. MRI to PET Cross-Modality Translation using Globally and Locally Aware GAN (GLA-GAN) for Multi-Modal Diagnosis of Alzheimer’s Disease. arXiv 2021, arXiv:2108.02160. [Google Scholar]
  103. Li, Q.; Zhu, X.; Zou, S.; Zhang, N.; Liu, X.; Yang, Y.; Zheng, H.; Liang, D.; Hu, Z. Eliminating CT radiation for clinical PET examination using deep learning. Eur. J. Radiol. 2022, 154, 110422. [Google Scholar] [CrossRef]
  104. Cohen, J.P.; Luck, M.; Honari, S. Distribution Matching Losses Can Hallucinate Features in Medical Image Translation; Springer: Cham, Switzerland, 2018; pp. 529–536. [Google Scholar]
Figure 1. The PRISMA diagram detailing this systematic review.
Figure 2. Breakdown of type of synthesis.
Figure 3. Year of publication of the reviewed studies.
Figure 4. Methods for evaluating the synthetic images.
Figure 5. Stated motivations for medical image synthesis.
Figure 6. Deep learning frameworks used for medical image synthesis.
Figure 7. Boxplot of number of patients comprising dataset (axis limited to exclude extremes). Blue X marks the mean.
Table 1. Search terms used for the electronic databases.

Category | Search Terms
Machine Learning | machine learning, GAN, generative adversarial network, convolutional neural network, artificial intelligence, deep learning
Image Generation | synth*, generat*, pseudo*, transform*
Medical Imaging | MRI, MR, CT, PET
Table 2. Descriptions of the Synthesis Type and Motivations categories.

Extracted Variable | Category | Description
Synthesis Type | CT to MRI | Using a CT to generate an MRI
Synthesis Type | MRI to CT | Using an MRI to generate a CT
Synthesis Type | Cross MRI | Using one MRI sequence to generate a different MRI modality, e.g., using a T1w MRI to generate a T2w MRI
Synthesis Type | MRI to PET | Using an MRI to generate a PET
Synthesis Type | PET to CT | Using a PET to generate a CT
Motivations | Aid Diagnosis | Synthesizing unobtained scans to provide extra information for diagnosis
Motivations | Missing Data | Improving paired datasets by synthesizing missing scans
Motivations | Memory Efficiency | Improving the memory efficiency of synthesis models so that high-quality scans can be synthesized
Motivations | Attenuation Correction | Synthesizing scans of a modality which can aid in attenuation correction of PETs
Motivations | Multimodal Registration | Synthesizing scans of a modality which is simpler to register to the target
Motivations | MRI-only Radiation Therapy | Synthesizing a CT so that a patient only requires an MRI before radiation therapy
Motivations | Reduce Radiation | Synthesizing a scan which would otherwise expose the patient to radiation
Motivations | Segmentation | Synthesizing scans of a modality which can help segmentation models either in training or in segmenting the scan
Table 3. A summary of the data extracted from the reviewed literature.

Paper | Year | Synthesis Type | Motivations | Body Part | Model Framework | Number of Patients | Evaluation Methods
[5] | 2020 | Cross MRI | Aid Diagnosis | Brain | GAN | 274 | SSIM, PSNR, NMSE
[6] | 2022 | Cross MRI | Aid Diagnosis | Brain | GAN, CNN | Unspecified | SSIM, MSE, PSNR, VIF, FID
[7] | 2020 | Cross MRI | Aid Diagnosis | Brain | CNN | 15 | PSNR, SSIM, HFEN
[8] | 2022 | Cross MRI | Aid Diagnosis | Brain | CNN | Unspecified | MAPE, RSMPE, SSIM
[9] | 2020 | Cross MRI | Increase Data | Brain | GAN | 1113 | Estimated Divergence
[10] | 2020 | Cross MRI | Memory Efficiency | Brain | GAN | 274 | SSIM, MAE, PSNR, MSE
[11] | 2022 | Cross MRI | Aid Diagnosis, Increase Data | Brain | GAN | 127 | MAE, SSIM, PSNR, MI
[12] | 2019 | Cross MRI | Aid Diagnosis | Brain | CNN | 15 | SSIM
[13] | 2023 | Cross MRI | Aid Diagnosis | Brain | GAN | 128 | MAE, SSIM, PSNR
[14] | 2022 | Cross MRI | Segmentation | Brain | GAN | 210 | DSC, ASSD
[15] | 2022 | Cross MRI | Aid Diagnosis | Brain | GAN | 285 | SSIM, PSNR, Experts
[16] | 2023 | Cross MRI | Increase Data | Brain | GAN | 372 | MSE, SSIM
[17] | 2021 | Cross MRI | Increase Data | Brain | GAN | 199 | MSE, SSIM, PSNR
[18] | 2022 | CT to MRI | Aid Diagnosis | Lumbar | GAN | 285 | SSIM, PSNR, Experts
[19] | 2020 | CT to MRI | Increase Data | Brain | GAN, CNN | 34 | MAE, SSIM, PSNR
[20] | 2021 | CT to MRI | Increase Data | Pelvis | GAN, CNN | 17 | PSNR, SSIM, Experts, DSC
[21] | 2021 | CT to MRI | Increase Data | Head and Neck | GAN | 202 | Segmentation
[22] | 2020 | CT to MRI | Multimodal Registration, Aid Diagnosis | Brain | GAN, CNN | 34 | MAE, MSE, SSIM, PSNR
[23] | 2019 | CT to MRI | Segmentation | Pelvis | GAN | 140 | Segmentation
[24] | 2021 | CT to MRI | Segmentation | Head and Neck | GAN | 118 | Segmentation
[25] | 2023 | CT to MRI | Aid Diagnosis, Multimodal Registration | Brain | GAN, CNN | 181 | MAE, MSE, PSNR, SSIM, Registration, DSC
[26] | 2019 | CT to MRI | Segmentation, Aid Diagnosis | Brain | GAN | 94 | DSC, HD
[27] | 2022 | CT to MRI | Aid Diagnosis | Brain | GAN | 103 | Experts
[28] | 2021 | CT to MRI, MRI to CT | Aid Diagnosis | Prostate | GAN | 271 | KID, FID, DSC
[29] | 2022 | MRI to CT | Attenuation Correction | Whole Body | CNN | 46 | MAE, Regional Analysis, Correlation
[30] | 2021 | MRI to CT | Aid Diagnosis | Lumbar | CNN | 30 | Regional Analysis
[31] | 2022 | MRI to CT | Aid Diagnosis | Hip | CNN | 27 | Regional Analysis
[32] | 2021 | MRI to CT | Aid Diagnosis | Sacroiliac Joint | CNN | 30 | Diagnostic Accuracy
[33] | 2022 | MRI to CT | Aid Diagnosis | Hip | CNN | 30 | Regional Analysis
[34] | 2023 | MRI to CT | Aid Diagnosis | Knee | CNN | 69 | Diagnostic Accuracy
[35] | 2023 | MRI to CT | MRI-only Radiation Therapy | Brain | GAN, CNN | 104 | MAE, Dosimetric
[36] | 2020 | MRI to CT | MRI-only Radiation Therapy | Brain | GAN | 77 | MAE
[37] | 2022 | MRI to CT | MRI-only Radiation Therapy | Head and Neck | CNN | 47 | MAE, SSIM, Dosimetric
[38] | 2020 | MRI to CT | MRI-only Radiation Therapy | Brain | GAN | 60 | MAE, Dosimetric
[39] | 2022 | MRI to CT | MRI-only Radiation Therapy | Head and Neck | GAN | 206 | MAE, Dosimetric
[40] | 2019 | MRI to CT | MRI-only Radiation Therapy | Liver | GAN | 21 | MAE, Dosimetric
[41] | 2021 | MRI to CT | MRI-only Radiation Therapy | Head and Neck | GAN | 56 | MAE, SSIM, PCC, FID, SWD, BD, PSNR, DSC
[42] | 2020 | MRI to CT | MRI-only Radiation Therapy | Pelvis | CNN | 15 | ME, MAE, SSIM, PSNR, PCC
[43] | 2021 | MRI to CT | MRI-only Radiation Therapy | Pelvis | GAN, CNN | 20 | ME, MAE, PCC, SSIM, PSNR
[44] | 2021 | MRI to CT | MRI-only Radiation Therapy | Head and Neck | GAN, CNN | 164 | MAE, ME, PSNR
[45] | 2021 | MRI to CT | MRI-only Radiation Therapy | Prostate | GAN | 113 | ME, MAE, PSNR
[46] | 2021 | MRI to CT | MRI-only Radiation Therapy | Brain | GAN, CNN | 18 | MAE, MSE, PSNR, SSIM, PCC
[47] | 2019 | MRI to CT | MRI-only Radiation Therapy | Liver | GAN, CNN | 21 | NCC, MAE, PSNR
[48] | 2019 | MRI to CT | MRI-only Radiation Therapy | Brain | GAN | 77 | MAE, DSC
[49] | 2021 | MRI to CT | MRI-only Radiation Therapy | Head and Neck | CNN | 23 | MAE, Dosimetric
[50] | 2023 | MRI to CT | MRI-only Radiation Therapy | Abdomen | GAN, CNN | 76 | Dosimetric
[51] | 2019 | MRI to CT | MRI-only Radiation Therapy | Head and Neck | CNN | 34 | MAE, ME, Dosimetric
[52] | 2021 | MRI to CT | MRI-only Radiation Therapy | Brain | GAN | 37 | Dosimetric
[53] | 2020 | MRI to CT | MRI-only Radiation Therapy | Pelvis | GAN | 120 | Dosimetric
[54] | 2019 | MRI to CT | MRI-only Radiation Therapy | Brain | CNN | 60 | MAE
[55] | 2023 | MRI to CT | MRI-only Radiation Therapy | Abdomen | CNN | 39 | MAE, Dosimetric
[56] | 2022 | MRI to CT | MRI-only Radiation Therapy | Prostate | GAN | 39 | MAE, ME, MAPE, DSC
[57] | 2020 | MRI to CT | MRI-only Radiation Therapy | Abdomen | GAN | 12 | MAE, Dosimetric
[58] | 2022 | MRI to CT | MRI-only Radiation Therapy | Thorax | GAN | 60 | MAE, ME, Dosimetric
[59] | 2022 | MRI to CT | MRI-only Radiation Therapy | Brain | GAN | 24 | MAE, PSNR, SSIM
[60] | 2021 | MRI to CT | MRI-only Radiation Therapy | Brain | CNN | 30 | ME, MAE, MSE
[61] | 2021 | MRI to CT | MRI-only Radiation Therapy | Pelvis | GAN | 38 | MAE, Dosimetric
[62] | 2020 | MRI to CT | MRI-only Radiation Therapy | Pelvis | GAN | 19 | MAE
[63] | 2020 | MRI to CT | MRI-only Radiation Therapy | Abdomen | CNN | 31 | MAE, Dosimetric
[64] | 2021 | MRI to CT | MRI-only Radiation Therapy | Head and Neck | CNN, GAN | 35 | MAE, SSIM, PSNR
[65] | 2022 | MRI to CT | MRI-only Radiation Therapy | Pelvis | GAN | 40 | Dosimetric
[66] | 2021 | MRI to CT | MRI-only Radiation Therapy | Brain | CNN | 20 | MAE, Dosimetric
[67] | 2022 | MRI to CT | MRI-only Radiation Therapy | Brain | CNN | 21 | Dosimetric
[68] | 2018 | MRI to CT | MRI-only Radiation Therapy | Pelvis | GAN | 91 | Dosimetric
[69] | 2020 | MRI to CT | MRI-only Radiation Therapy | Head and Neck | GAN, CNN | 45 | MAE, SSIM, PSNR, DSC, Dosimetric
[70] | 2020 | MRI to CT | MRI-only Radiation Therapy | Pelvis | CNN | 23 | MAE, ME, DSC, Regional Analysis, PSNR
[71] | 2021 | MRI to CT | MRI-only Radiation Therapy | Prostate | CNN | 30 | MAE
[72] | 2019 | MRI to CT | MRI-only Radiation Therapy | Thorax | GAN | 60 | RMSE, SSIM, PSNR, Dosimetric
[73] | 2022 | MRI to CT | MRI-only Radiation Therapy | Prostate | GAN | 57 | MAE, PSNR, SSIM, Dosimetric
[74] | 2022 | MRI to CT | MRI-only Radiation Therapy | Brain | GAN | 54 | MAE, SSIM, Dosimetric
[75] | 2021 | MRI to CT | MRI-only Radiation Therapy | Pelvis | GAN, CNN | 30 | MAE, RMSE, PSNR, SSIM
[76] | 2021 | MRI to CT | MRI-only Radiation Therapy | Brain | GAN | 184 | Dosimetric
[77] | 2017 | MRI to CT | MRI-only Radiation Therapy | Brain | CNN | 18 | MAE, MSE, PCC
[78] | 2021 | MRI to CT | MRI-only Radiation Therapy | Brain | GAN | 12 | Dosimetric, Registration
[79] | 2019 | MRI to CT | MRI-only Radiation Therapy | Brain | GAN | 24 | MAE, PSNR, NCC
[80] | 2019 | MRI to CT | MRI-only Radiation Therapy | Prostate | GAN | 17 | MAE, Dosimetric
[81] | 2020 | MRI to CT | MRI-only Radiation Therapy | Head and Neck | GAN | 173 | MAE, Dosimetric
[82] | 2019 | MRI to CT | MRI-only Radiation Therapy | Head and Neck | CNN | 33 | MAE, ME
[83] | 2023 | MRI to CT | MRI-only Radiation Therapy | Head and Neck | GAN | 79 | MAE, PSNR, SSIM
[75] | 2021 | MRI to CT | MRI-only Radiation Therapy | Thorax | GAN, CNN | 30 | MAE, RMSE, PSNR, SSIM
[75] | 2021 | MRI to CT | MRI-only Radiation Therapy | Abdomen | GAN, CNN | 30 | MAE, RMSE, PSNR, SSIM
[79] | 2019 | MRI to CT | MRI-only Radiation Therapy | Pelvis | GAN | 20 | MAE, PSNR, NCC
[19] | 2020 | MRI to CT | MRI-only Radiation Therapy | Brain | GAN, CNN | 34 | MAE, SSIM, PSNR
[84] | 2021 | MRI to CT | Multimodal Registration | Head and Neck | GAN | 25 | Registration
[85] | 2021 | MRI to CT | Reduce Radiation | Lower Arm | GAN | 8 | Surgical Planning Errors
[86] | 2022 | MRI to CT | Reduce Radiation | Head and Neck | CNN | 39 | MAE, MSE
[87] | 2020 | MRI to CT | Multimodal Registration | Head and Neck | GAN, CNN | 9 | MAE, PCC, SLPD
[88] | 2022 | MRI to CT | Segmentation | Abdomen | GAN | 34 | Segmentation
[89] | 2018 | MRI to CT | Attenuation Correction | Brain | CNN | 7 | PSNR, Correlation
[90] | 2020 | MRI to CT | MRI-only Radiation Therapy | Brain | GAN | 15 | MAE
[91] | 2022 | MRI to CT | Aid Diagnosis | Pelvis | GAN, CNN | 19 | SSIM
[92] | 2018 | MRI to CT | Attenuation Correction | Brain | CNN | 20 | MAE, PET Reconstruction
[93] | 2017 | MRI to CT | MRI-only Radiation Therapy | Brain | GAN | 24 | MAE, PSNR
[94] | 2018 | MRI to CT | MRI-only Radiation Therapy | Brain | GAN | 45 | MAE, PSNR, SSIM
[95] | 2021 | MRI to CT | MRI-only Radiation Therapy | Brain | GAN | 45 | MAE, PSNR, SSIM
[96] | 2021 | MRI to CT | MRI-only Radiation Therapy | Abdomen | GAN | 89 | MAE, DSC
[97] | 2023 | MRI to CT | MRI-only Radiation Therapy | Brain | GAN | 95 | MAE, GPR
[98] | 2019 | MRI to CT | Attenuation Correction | Brain | CNN | 400 | MAE, PET Reconstruction
[99] | 2021 | MRI to CT | MRI-only Radiation Therapy | Brain | GAN, CNN | 86 | MAE, SSIM, PSNR
[100] | 2021 | MRI to PET | Increase Data | Whole Body | CNN | 56 | AC
[101] | 2022 | MRI to PET | Aid Diagnosis | Brain | CNN | 120 | PSNR, SSIM
[102] | 2021 | MRI to PET | Aid Diagnosis | Brain | GAN | 481 | MAE, SSIM, PSNR
[103] | 2022 | PET to CT | Reduce Radiation, Attenuation Correction | Whole Body | GAN | 34 | NRMSE, PSNR, PCC, SSIM