Generative Adversarial Network (Generative Artificial Intelligence) in Pediatric Radiology: A Systematic Review
Abstract
1. Introduction
2. Materials and Methods
2.1. Literature Search
2.2. Article Selection
2.3. Data Extraction and Synthesis
3. Results
Author, Year & Country | Modality | GAN Architecture | Study Design | Patient/Population | Dataset Source | Dataset Size | Sample Size Calculation | Application Area | Commercial Availability | Internal Validation Type | External Validation | Reference Standard | AI vs. Clinician | Key Findings |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Disease Diagnosis | ||||||||||||||
Kuttala et al. (2022)—Australia, India, and the United Arab Emirates [77] | MRI | GAN | Retrospective | Children (median ages: 12.6 (baseline) and 15.0 (follow-up) years) | Public brain MRI dataset (Autism Brain Imaging Data Exchange II) | Total: 70 scans-training: 24; testing: 46 | No | Autism diagnosis based on brain MRI images | No | NR | No | NR | No | 158.6% accuracy (U-Net: 0.370; GAN: 0.957) and 114.3% AUC (U-Net: 0.420; GAN: 0.900) improvements for autism diagnosis, respectively |
Kuttala et al. (2022)—Australia, India, and the United Arab Emirates [78] | MRI | GAN | Retrospective | Children (median ages: 12 (baseline) and 15 (follow-up) years) | Public brain MRI datasets (ADHD-200 and Autism Brain Imaging Data Exchange II) | Total: 265 scans-training: 48; testing: 217 | No | ADHD and autism diagnosis based on brain MRI images | No | NR | No | NR | No | 29.6% and 39.7% accuracy improvements for ADHD and autism diagnoses (3D CNN: 0.659 and 0.700; GAN: 0.854 and 0.978), respectively. GAN AUC: 0.850 (ADHD) and 0.910 (autism) |
Motamed and Khalvati (2021)—Canada [79] | X-ray | DCGAN | Retrospective | 1–5-year-old children | Public CXR dataset by Guangzhou Women and Children’s Medical Center, China | Total: 4875 images-training: 3875; testing: 1000 | No | Pneumonia diagnosis based on CXR | No | NR | No | NR | No | 3.5% AUC improvement (Deep SVDD: 0.86; DCGAN: 0.89) |
Image Reconstruction | ||||||||||||||
Dittimi and Suen (2020)—Canada [80] | X-ray | GAN | Retrospective | 1–5-year-old children | Public CXR dataset by Guangzhou Women and Children’s Medical Center, China | Total: 5863 images | No | CXR image reconstruction (super-resolution) | No | 70:30 random split | No | Original CXR images | No | 19.1% SSIM (SRCNN: 0.832; SRCNN-GAN: 0.991) and 46.5% PSNR (SRCNN: 26.18; SRCNN-GAN: 38.36 dB) improvements |
Fu et al. (2022)—China [81] | PET | TransGAN | Retrospective | Children | Private brain PET dataset by Hangzhou Universal Medical Imaging Diagnostic Center, China | Total: 45 scans | No | Brain PET image reconstruction (denoising) | No | 10-fold cross-validation | No | Original full-dose PET images | No | 10.3% SSIM (U-Net: 0.861; TransGAN-SDAM: 0.950) and 29.9% PSNR (U-Net: 26.1; TransGAN-SDAM: 33.9 dB) improvements with 67.7% VSMD reduction (U-Net: 0.133; TransGAN-SDAM: 0.043) |
Park et al. (2022)—Republic of Korea [82] | CT | GAN | Retrospective | 3 groups of children (mean ages (years): 6.2 ± 2.2; 7.2 ± 2.5; 7.4 ± 2.2) | Private abdominal CT dataset | Total: 3160 images-training: 1680; validation: 820; testing: 660 | No | Low-dose abdominal CT image reconstruction (denoising) | No | NR | Yes | Consensus of 1 pediatric and 1 abdominal radiologist (with 6 and 8 years of experience, respectively) | Yes | 42.7% noise reduction (LDCT: 12.4 ± 5.0; SAFIRE: 9.5 ± 4.0; GAN: 7.1 ± 2.7), and 39.3% (portal vein) and 45.8% (liver) SNR (LDCT: 22.9 ± 9.3 and 13.1 ± 5.7; SAFIRE: 30.1 ± 12.2 and 17.3 ± 7.6; GAN: 31.9 ± 13.0 and 19.1 ± 7.9) and 30.9% (portal vein) and 32.8% (liver) CNR (LDCT: 16.2 ± 7.5 and 6.4 ± 3.7; SAFIRE: 21.2 ± 9.8 and 8.5 ± 5.0; GAN: 21.2 ± 10.1 and 8.5 ± 4.3) improvements when compared with LDCT images, respectively. |
Pham et al. (2019)—France [83] | MRI | 3D GAN | Retrospective | Neonates | Public (Developing Human Connectome Project) and private brain MRI datasets by Reims Hospital, France | Total: 40 images-training: 30; testing: 10 | No | Brain MRI image reconstruction (super-resolution) and segmentation | No | NR | Yes | NR | No | 1.39% SSIM (non-DL: 0.9492; SRCNN: 0.9739; GAN: 0.9624) and 3.42% PSNR (non-DL: 30.70 dB; SRCNN: 35.84 dB; GAN: 31.75 dB) improvements for super-resolution and 12.4% DSC improvement for segmentation (atlas-based: 0.788; intensity-based: 0.818; GAN: 0.886) when compared with non-DL approaches, respectively |
Image Segmentation | ||||||||||||||
Decourt and Duong (2020)—Canada and France [84] | MRI | GAN | Retrospective | 2–18-year-old children | Private cardiac MRI dataset by Hospital for Sick Children in Toronto, Canada | Total: 33 scans-training: 25; validation: 4; testing: 4 | No | Cardiac MRI image segmentation | No | Cross-validation | Yes | Manual segmentation by clinicians | No | 2.4% mean DSC improvement (U-Net: 0.85; GAN: 0.87) with 3.8% mean HD reduction (U-Net: 2.55 mm; GAN: 2.46 mm) |
Guo et al. (2019)—China [85] | US | DNGAN | NR | 0–10-year-old children | Private echocardiography dataset by a Chinese hospital | Total: 87 scans-training: 1765 images; testing: 451 images | No | Echocardiography image segmentation | No | NR | No | NR | No | 4.6% mean DSC (U-Net: 0.88; DNGAN: 0.92), 7.6% mean Jaccard index (U-Net: 0.80; DNGAN: 0.86) and 8.5% mean PPV (U-Net: 0.86; DNGAN: 0.94) improvements but with 0.9% mean sensitivity reduction (U-Net: 0.93; DNGAN: 0.92) |
Kan et al. (2021)—USA [86] | CT | AC-GAN | NR | 1–17-year-old children | Private abdominal CT dataset by Medical College of Wisconsin, USA | Total: 64 scans | No | Abdominal CT image segmentation | No | 4-fold cross-validation | No | NR | No | 3.9% and 0.7% mean DSC improvements (U-Net: 0.697 and 0.923; GAN: 0.724 and 0.929) with 35.0% and 13.3% mean HD reductions (U-Net: 1.090 and 0.390 mm; GAN: 0.709 and 0.338 mm) for uterus and prostate segmentations, respectively |
Karimi-Bidhendi et al. (2020)—USA [87] | MRI | DCGAN | Retrospective | 2–18-year-old children | Public cardiac MRI datasets by Children’s Hospital Los Angeles, USA, and ACDC | Total: 159 scans-training: 41; testing: 118 | No | Cardiac MRI image segmentation | No | 80:20 random split | Yes | Manual image segmentation by a pediatric cardiologist sub-specialized in cardiac MRI | No | 34.5% mean DSC (cvi42: 0.631; U-Net: 0.782; DCGAN: 0.848), 38.5% Jaccard index (cvi42: 0.556; U-Net: 0.702; DCGAN: 0.770), 53.2% R2 (cvi42: 0.629; U-Net: 0.871; DCGAN: 0.963), 30.8% sensitivity (cvi42: 0.666; U-Net: 0.775; DCGAN: 0.872), 0.1% specificity (cvi42: 0.997; U-Net: 0.998; DCGAN: 0.998), 34.0% PPV (cvi42: 0.636; U-Net: 0.839; DCGAN: 0.852) and 0.4% NPV (cvi42: 0.995; U-Net: 0.997; DCGAN: 0.998) improvements with 24.7% mean HD (cvi42: 11.0 mm; U-Net: 11.0 mm; DCGAN: 8.3 mm) and 31.6% MCD reductions (cvi42: 4.4 mm; U-Net: 4.5 mm; DCGAN: 3.0 mm) when compared with cvi42 |
Zhou et al. (2022)—Canada [88] | US | pix2pix GAN | Prospective | Children | Private wrist US dataset by University of Alberta Hospital, Canada | Total: 57 scans-training: 47; testing: 10 | No | Wrist US image segmentation | No | NR | No | Manual segmentation by radiologist and sonographer with 18 and 7 years’ experience, respectively | No | 7.5% sensitivity improvement (U-Net: 0.642; GAN: 0.690) but with 5.6% DSC (U-Net: 0.698; GAN: 0.659), 8.6% Jaccard index (U-Net: 0.548; GAN: 0.501) and 17.8% PPV (U-Net: 0.783; GAN: 0.644) reductions |
Image Synthesis and Data Augmentation | ||||||||||||||
Banerjee et al. (2021)—India [89] | X-ray | GAN | Retrospective | 1–5-year-old children | Public CXR dataset by Guangzhou Women and Children’s Medical Center, China | Total: 5863 images | No | CXR image synthesis and data augmentation for DL-CAD model training | No | NR | No | NR | No | 13,921 images were generated for training the DL-CAD model for pneumonia with 6.3% accuracy improvement (with and without GAN: 0.986 and 0.928), respectively |
Diller et al. (2020)—Germany [90] | MRI | PG-GAN | Retrospective | Children with a median age of 15 years (IQR: 12.8–19.3 years) | Private cardiac MRI dataset by German Competence Network for Congenital Heart Defects | Total: 303 scans | No | Cardiac MRI image synthesis and data augmentation | No | NR | No | Ground truth determined by researchers | Yes | Mean rates of PG-GAN generated images identified by clinicians being fake: 70.5% (3 cardiologists) and 86.7% (2 cardiac MRI experts) |
Guo et al. (2021)—China [91] | X-ray | AC-GAN | Retrospective | 1–5-year-old children | Public CXR dataset by Guangzhou Women and Children’s Medical Center, China | Total: 5856 images-training: 1500; testing: 4356 | No | CXR image synthesis and data augmentation for DL-CAD model training | No | NR | No | NR | No | 250 pneumonia and 250 normal images generated for DL-CAD model training with 0.6% accuracy improvement (with and without AC-GAN: 0.913 and 0.907), respectively |
Guo et al. (2022)—China [92] | X-ray | AC-GAN | Prospective | 2–14-year-old children | Private CXR dataset by Quanzhou Women’s and Children’s Hospital, China | Total: 6442 images-training: 3600 | No | CXR image synthesis and data augmentation for DL-CAD model training | No | NR | No | NR | No | 2000 images generated with 7.7% and 13.5% differences between ground truth (IS: 2.08) and AC-GAN generated normal (IS: 1.92) and pneumonia (IS: 1.80) images, respectively. The use of AC-GAN images for training the DL-CAD model improved sensitivity (with and without AC-GAN: 0.86 and 0.62), specificity (with and without AC-GAN: 0.97 and 0.90), and accuracy (with and without AC-GAN: 0.91 and 0.76) by 38.7%, 7.8%, and 19.7%, respectively |
Kan et al. (2020)-USA [93] | CT | AC-GAN | NR | 1–18-year-old children | NR | Total: 5 scans | No | Pancreatic CT image synthesis and data augmentation | No | NR | No | NR | No | AC-GAN generated high-resolution pancreas images with fine details, without the streak artifacts and irregular pancreas contours observed with DCGAN |
Khalifa et al. (2022)-Egypt [94] | X-ray | GAN | Retrospective | 1–5-year-old children | Public CXR dataset by Guangzhou Women and Children’s Medical Center, China | Total: 624 images | No | CXR image synthesis and data augmentation for DL-CAD model training | No | 80:20 random split | No | Specialist consensus | No | 5616 images generated for training the DL-CAD model for pneumonia with 6.7% accuracy improvement (with and without GAN: 0.990 and 0.928), respectively |
Kora Venu (2021)-USA [95] | X-ray | DCGAN | Retrospective | 1–5 years old children | Public CXR dataset by Guangzhou Women and Children’s Medical Center, China | Total: 5856 images-training: 4684; testing: 1172 | No | CXR image synthesis and data augmentation for DL-CAD model training | No | 80:20 random split | No | NR | No | 2152 images generated for training DL-CAD model for pneumonia with 2.6% AUC (with and without DCGAN: 0.993 and 0.968), 6.5% sensitivity (with and without DCGAN: 0.993 and 0.932), 13.5% PPV (with and without DCGAN: 0.990 and 0.872), 6.4% accuracy (with and without DCGAN: 0.987 and 0.928) and 10.0% F1 score improvements (with and without DCGAN: 0.991 and 0.901), respectively |
Li and Ke (2022)-USA [96] | X-ray | DCGAN | Retrospective | 1–5 years old children | Public CXR dataset by Guangzhou Women and Children’s Medical Center, China | Total: 5910 images-training: 4300; validation: 724; testing: 886 | No | CXR image synthesis and data augmentation for DL-CAD model training | No | 90:10 random split | No | NR | No | 2700 images generated for training DL-CAD model for pneumonia with 13.7% accuracy (with and without DCGAN: 0.960 and 0.844) and 1.1% AUC (with and without DCGAN: 0.994 and 0.983) improvements, respectively |
Prince et al. (2020)-Canada and USA [97] | CT and MRI | GAN | Retrospective | Children | Public (ATPC Consortium) and private brain CT-MRI datasets by Children’s Hospital Colorado and St. Jude Children’s Research Hospital, USA | Total: 86 CT-MRI scans-training: 53; testing: 33 | No | Brain CT-MRI image synthesis and data augmentation for DL-CAD model training | No | 60:40 random split and 5-fold cross-validation | No | Histology | Yes | 2000 CT and 2000 MRI images generated for training DL-CAD model for adamantinomatous craniopharyngioma with 0.890 (CT) and 0.974 (MRI) accuracy. 17.0% AUC improvement for MRI (radiologists: 0.833; GAN: 0.975) but 1.6% AUC reduction for CT (radiologists: 0.894; GAN: 0.880). |
Su et al. (2021)-China [98] | X-ray | WGAN | Retrospective | 1–19 years old children | Public hand X-ray dataset (RSNA Pediatric Bone Age Challenge) | Total: 14,236 images-training: 12,611; validation: 1425; testing: 200 | No | Hand X-ray image synthesis and data augmentation, and bone age assessment | No | NR | No | Manual assessment by expert clinicians | No | 11,350 images generated with 7.9 IS, 17.3 FID and 20.0% MAE reduction (CNN: 5.29 months; WGAN: 4.23 months) |
Szepesi and Szilágyi (2022)-Hungary and Romania [99] | X-ray | GAN | Retrospective | 1–5 years old children | Public CXR dataset by Guangzhou Women and Children’s Medical Center, China | Total: 5856 images-training: 4099; validation: 586; testing: 1171 | No | CXR image synthesis and data augmentation for DL-CAD model training | No | 10-fold cross-validation | No | Expert clinicians | No | 2152 images generated for training DL-CAD model for pneumonia with 0.9820 AUC, 0.9734 sensitivity, 0.9740 PPV, 0.9721 accuracy, and 3.9% F1 score improvement (CNN: 0.9375; GAN: 0.9740) |
Vetrimani et al. (2023)-India [100] | X-ray | DCGAN | Retrospective | 1–8 years old children | Public CXR datasets by Guangzhou Women and Children’s Medical Center, China and from various websites such as Radiopaedia | Total: 987 images-training: 645; validation: 342 | No | CXR image synthesis and data augmentation for DL-CAD model training | No | NR | No | NR | No | Additional images generated by DCGAN for training DL-CAD model for laryngotracheobronchitis with 0.8791 sensitivity, 0.854 PPV, 0.8832 accuracy and 0.8666 F1 score. |
Image Translation | ||||||||||||||
Chen et al. (2021)-China and USA [101] | MRI | 3D CycleGAN | Retrospective | Neonates | Private brain MRI datasets by Xi’an Jiaotong University, China and University of North Carolina, USA | Total: 40 images | No | Image translation (for domain adaptation in brain MRI image segmentation) | No | NR | No | NR | No | 1.2% mean DSC improvement (with and without 3D CycleGAN: 0.86 and 0.85) with 12.8% mean HD (with and without 3D CycleGAN: 13.03 and 14.94 mm) and 16.0% MSD (with and without 3D CycleGAN: 0.23 and 0.27 mm) reductions, respectively |
Hržić et al. (2021)-Austria, Croatia and Germany [102] | X-ray | CycleGAN | Retrospective | Children (mean age: 11 ± 4 years) | Private wrist X-ray dataset by Medical University of Graz, Austria | Total: 9672 images- training: 7600; validation: 636; testing: 1436 | No | Wrist X-ray image translation (cast suppression) | No | NR | No | Real castless wrist X-ray images | No | Real castless and CycleGAN generated cast suppressed image histogram similarity scores: 0.998 (correlation) and 222,503 (intersection) with difference values: 59,451 (chi-square distance) and 0.147 (Hellinger distance) |
Kaplan et al. (2022)-USA and Germany [103] | MRI | 3D CycleGAN | Prospective | Neonates (mean PMA: 41.1 ± 1.5 weeks) and infants (mean age: 41.2 ± 1.9 weeks) | Private brain MRI datasets by Washington University and ECHO Program, USA | Total: 137 scans-training: 107; testing: 30 | No | Brain MRI image translation (T1w-to-T2w) | No | NR | Yes | Real T2w MRI images acquired from same patients | No | 9.7% and 7.9% SSIM and DSC improvements (Kaplan-T2w: 0.72 and 0.76; CycleGAN: 0.79 and 0.82) with 18.8% relative MAE reduction (Kaplan-T2w: 6.9; CycleGAN: 5.6) and no statistically significant CNR difference (Kaplan-T2w: 0.76; CycleGAN: 0.63; original images: 0.62), respectively |
Khalili et al. (2019)-The Netherlands [104] | MRI | CycleGAN | NR | Neonates (mean PMA: 30.7 ± 1.0 weeks) | Private brain MRI dataset by University Medical Center Utrecht, The Netherlands | Total: 80 scans-training: 35; testing: 45 | No | Brain MRI image translation between motion blurred and blurless ones for training DL-segmentation model | No | NR | No | NR | No | 6.7% DSC improvement (with and without CycleGAN: 0.80 and 0.75) with 32.4% HD (with and without CycleGAN: 25.0 and 37.0 mm) and 60.5% MSD reductions (with and without CycleGAN: 0.5 and 1.3 mm) for segmentation, respectively. Median subjective image quality and segmentation accuracy ratings (scale 1–5): before (2 and 3) and after motion unsharpness correction (3 and 4), respectively |
Maspero et al. (2020)-The Netherlands [105] | MRI | 2D CGAN | Retrospective | 2.6–19 (mean: 10 ± 5) years old children | Private brain CT and T1w MRI dataset by University Medical Center Utrecht, The Netherlands | Total: 60 CT and MRI scans-training: 30; validation: 10; testing: 20 | No | Brain MRI image translation to CT for radiation therapy planning | No | 4-fold cross-validation | No | Real CT images acquired from same patients | No | DSC: 0.92; MAE: 61 HU for CT images generated from MRI images by CGAN |
Peng et al. (2020)-China, Japan and USA [106] | MRI | 3D GAN | Retrospective | 6–12 months old children | Public brain MRI dataset (Infant Brain Imaging Study) | Total: 578 scans-training: 462; validation: 58; testing: 58 | No | Brain MRI image translation between images acquired 6 months apart | No | NR | No | Real MRI images acquired from same patient 6 months apart | No | 1.5% DSC improvement (U-Net: 0.809; GAN: 0.821) and 7.5% MSD reduction (U-Net: 0.577 mm; GAN: 0.534 mm) but with 16.8% RVD increase (U-Net: 0.0424; GAN: 0.0495) |
Tang et al. (2019)-China and USA [107] | X-ray | CycleGAN | Retrospective | 1–5 years old children and adult | Public CXR datasets by Guangzhou Women and Children’s Medical Center, China and from RSNA Pneumonia Detection Challenge | Total: 17,508 images-training: 16,884; testing: 624 | No | Image translation (for domain adaptation of DL-CAD) | No | 5-fold cross-validation | No | NR | No | 7.8% AUC (with and without CycleGAN: 0.963 and 0.893), 11.1% sensitivity (with and without CycleGAN: 0.929 and 0.836), 12.7% specificity (with and without CycleGAN: 0.911 and 0.808), 12.8% accuracy (with and without CycleGAN: 0.931 and 0.825) and 8.1% F1 score (with and without CycleGAN: 0.930 and 0.860) improvements, respectively |
Tor-Diez et al. (2020)-USA [108] | MRI | CycleGAN | NR | Children | Private brain MRI datasets by Children’s National Hospital, Children’s Hospital of Philadelphia and Children’s Hospital of Colorado, USA | Total: 18 scans | No | Image translation (for domain adaptation in brain MRI image segmentation) | No | Leave-one-out cross-validation | No | NR | No | 18.3% DSC improvement for anterior visual pathway segmentation (U-Net: 0.509; CycleGAN: 0.602) |
Wang et al. (2021)-USA [109] | MRI | CycleGAN | Retrospective | 2 groups of children (median ages: 8.3 and 6.4 years; ranges: 1–20 and 2–14 years), respectively | Private brain CT and T1w MRI datasets by St Jude Children’s Research Hospital, USA | Total: 132 CT and MRI scans-training: 125; testing: 7 | No | Brain MRI image translation to CT for radiation therapy planning | No | NR | No | Real CT images acquired from same patients | No | SSIM: 0.90; DSC of air/bone: 0.86/0.81; MAE: 65.3 HU; PSNR: 28.5 dB for CT images generated from MRI images by CycleGAN |
Wang et al. (2021)-USA [110] | MRI | CycleGAN | Retrospective | 1.1–21.3 years old children and adult | Private brain and pelvic CT and MRI datasets by St Jude Children’s Research Hospital, USA | Total: 141 CT and MRI scans; training: 136; testing: 5 | No | Pelvic MRI image translation to CT for radiation therapy planning | No | NR | No | Real CT images acquired from same patients | No | Mean SSIM: 0.93 and 0.93; MAE: 52.4 and 85.4 HU; ME: −3.4 and −6.6 HU; PSNR: 30.6 and 29.2 dB for CT images generated from T1w and T2w MRI images by CycleGAN, respectively |
Wang et al. (2022)-USA [111] | MRI | CycleGAN | Retrospective | 1.1–20.3 (median: 9.0) years old children and adults | Private brain CT and MRI datasets by St. Jude Children’s Research Hospital, USA | Total: 195 CT and MRI scans-training: 150; testing: 45 | No | Brain MRI image translation to CT and RPSP images for radiation therapy planning | No | NR | No | Real CT images acquired from same patients | No | SSIM: 0.92 and 0.91; DSC of air/bone: 0.98/0.83 and 0.97/0.85; MAE: 44.1 and 42.4 HU; ME: 8.6 and 18.8 HU; PSNR: 32.6 and 31.5 dB for CT images generated from T1w and T2w MRI images by CycleGAN, respectively |
Zhao et al. (2019)-China and USA [112] | MRI | CycleGAN | Retrospective | 0–2 years old children | Public brain MRI dataset (UNC/UMN Baby Connectome Project) | Total: 360 scans-training: 252; testing: 108 | No | Image translation (for domain adaptation) | No | NR | No | Original MRI images | No | 14.1% PSNR improvement (non-DL: 29.00 dB; CycleGAN: 33.09 dB) and 33.9% MAE reduction (non-DL: 0.124; CycleGAN: 0.082) for domain adaptation |
Other | ||||||||||||||
Mostapha et al. (2019)-USA [113] | MRI | 3D DCGAN | Retrospective | 1–6-year-old children | Public brain MRI datasets (UNC/UMN Baby Connectome Project and UNC Early Brain Development Study) | Total: 2187 scans | No | Automatic brain MRI image quality assessment | No | 80:20 random split | No | Manual image quality assessment by MRI experts | No | 92.9% sensitivity (VAE: 0.42; DCGAN: 0.81), 2.2% specificity (VAE: 0.93; DCGAN: 0.95), and 47.6% accuracy (VAE: 0.63; DCGAN: 0.93) improvements for automatic image quality assessment, respectively |
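The key findings above are reported as relative changes in standard overlap and image-fidelity metrics. For orientation, the percentage improvements quoted in the table appear to correspond to the simple relative difference, (GAN value − comparator value) / comparator value, for "higher is better" metrics such as DSC, SSIM, and AUC, and to the analogous relative reduction for "lower is better" metrics such as HD, MSD, and MAE. The minimal Python sketch below shows one common way of computing three of these metrics (DSC, Jaccard index, and PSNR); the arrays, shapes, and function names are illustrative assumptions only and are not taken from any of the reviewed studies.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray) -> float:
    """Dice similarity coefficient (DSC) between two binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    denom = pred.sum() + target.sum()
    return 2.0 * np.logical_and(pred, target).sum() / denom if denom > 0 else 1.0

def jaccard_index(pred: np.ndarray, target: np.ndarray) -> float:
    """Jaccard index (intersection over union) between two binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    union = np.logical_or(pred, target).sum()
    return np.logical_and(pred, target).sum() / union if union > 0 else 1.0

def psnr(reference: np.ndarray, test: np.ndarray, data_range: float) -> float:
    """Peak signal-to-noise ratio (PSNR) in dB between two images."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10((data_range ** 2) / mse)

# Illustrative example with synthetic masks/images (not data from any study).
rng = np.random.default_rng(0)
gt_mask = rng.random((128, 128)) > 0.5
pred_mask = gt_mask.copy()
pred_mask[:, :8] = ~pred_mask[:, :8]   # corrupt a strip to simulate segmentation error
reference_img = rng.random((128, 128))
denoised_img = reference_img + rng.normal(0.0, 0.01, reference_img.shape)

print(f"DSC: {dice_coefficient(pred_mask, gt_mask):.3f}")
print(f"Jaccard: {jaccard_index(pred_mask, gt_mask):.3f}")
print(f"PSNR: {psnr(reference_img, denoised_img, data_range=1.0):.1f} dB")
```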
4. Discussion
5. Conclusions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Wolterink, J.M.; Mukhopadhyay, A.; Leiner, T.; Vogl, T.J.; Bucher, A.M.; Išgum, I. Generative Adversarial Networks: A Primer for Radiologists. RadioGraphics 2021, 41, 840–857. [Google Scholar] [CrossRef] [PubMed]
- Pesapane, F.; Codari, M.; Sardanelli, F. Artificial intelligence in medical imaging: Threat or opportunity? Radiologists again at the forefront of innovation in medicine. Eur. Radiol. Exp. 2018, 2, 35. [Google Scholar] [CrossRef] [Green Version]
- Ng, C.K.C.; Sun, Z.; Jansen, S. Comparison of performance of micro-computed tomography (Micro-CT) and synchrotron radiation CT in assessing coronary stenosis caused by calcified plaques in coronary artery phantoms. J. Vasc. Dis. 2023, in press. [Google Scholar]
- Parczewski, M.; Kufel, J.; Aksak-Wąs, B.; Piwnik, J.; Chober, D.; Puzio, T.; Lesiewska, L.; Białkowski, S.; Rafalska-Kosior, M.; Wydra, J.; et al. Artificial neural network based prediction of the lung tissue involvement as an independent in-hospital mortality and mechanical ventilation risk factor in COVID-19. J. Med. Virol. 2023, 95, e28787. [Google Scholar] [CrossRef] [PubMed]
- Althaqafi, T.; Al-Ghamdi, A.S.A.-M.; Ragab, M. Artificial Intelligence Based COVID-19 Detection and Classification Model on Chest X-ray Images. Healthcare 2023, 11, 1204. [Google Scholar] [CrossRef]
- Alsharif, W.; Qurashi, A. Effectiveness of COVID-19 diagnosis and management tools: A review. Radiography 2021, 27, 682–687. [Google Scholar] [CrossRef] [PubMed]
- Alzubaidi, M.; Zubaydi, H.D.; Bin-Salem, A.A.; Abd-Alrazaq, A.A.; Ahmed, A.; Househ, M. Role of deep learning in early detection of COVID-19: Scoping review. Comput. Methods Programs Biomed. Updat. 2021, 1, 100025. [Google Scholar] [CrossRef]
- Kufel, J.; Bargieł, K.; Koźlik, M.; Czogalik, Ł.; Dudek, P.; Jaworski, A.; Cebula, M.; Gruszczyńska, K. Application of artificial intelligence in diagnosing COVID-19 disease symptoms on chest X-rays: A systematic review. Int. J. Med. Sci. 2022, 19, 1743–1752. [Google Scholar] [CrossRef]
- Pang, S.; Wang, S.; Rodríguez-Patón, A.; Li, P.; Wang, X. An artificial intelligent diagnostic system on mobile Android terminals for cholelithiasis by lightweight convolutional neural network. PLoS ONE 2019, 14, e0221720. [Google Scholar] [CrossRef] [Green Version]
- Kufel, J.; Bargieł, K.; Koźlik, M.; Czogalik, Ł.; Dudek, P.; Jaworski, A.; Magiera, M.; Bartnikowska, W.; Cebula, M.; Nawrat, Z.; et al. Usability of Mobile Solutions Intended for Diagnostic Images—A Systematic Review. Healthcare 2022, 10, 2040. [Google Scholar] [CrossRef]
- Verma, A.; Amin, S.B.; Naeem, M.; Saha, M. Detecting COVID-19 from chest computed tomography scans using AI-driven android application. Comput. Biol. Med. 2022, 143, 105298. [Google Scholar] [CrossRef]
- Patel, B.; Makaryus, A.N. Artificial Intelligence Advances in the World of Cardiovascular Imaging. Healthcare 2022, 10, 154. [Google Scholar] [CrossRef] [PubMed]
- Brady, S.L.; Trout, A.T.; Somasundaram, E.; Anton, C.G.; Li, Y.; Dillman, J.R. Improving Image Quality and Reducing Radiation Dose for Pediatric CT by Using Deep Learning Reconstruction. Radiology 2021, 298, 180–188. [Google Scholar] [CrossRef] [PubMed]
- Jeon, P.-H.; Kim, D.; Chung, M.-A. Estimates of the image quality in accordance with radiation dose for pediatric imaging using deep learning CT: A phantom study. In Proceedings of the 2022 IEEE International Conference on Big Data and Smart Computing (BigComp), Daegu, Republic of Korea, 17–22 January 2022; pp. 352–356. [Google Scholar] [CrossRef]
- Krueger, P.-C.; Ebeling, K.; Waginger, M.; Glutig, K.; Scheithauer, M.; Schlattmann, P.; Proquitté, H.; Mentzel, H.-J. Evaluation of the post-processing algorithms SimGrid and S-Enhance for paediatric intensive care patients and neonates. Pediatr. Radiol. 2022, 52, 1029–1037. [Google Scholar] [CrossRef]
- Lee, S.; Choi, Y.H.; Cho, Y.J.; Lee, S.B.; Cheon, J.-E.; Kim, W.S.; Ahn, C.K.; Kim, J.H. Noise reduction approach in pediatric abdominal CT combining deep learning and dual-energy technique. Eur. Radiol. 2021, 31, 2218–2226. [Google Scholar] [CrossRef]
- Nagayama, Y.; Goto, M.; Sakabe, D.; Emoto, T.; Shigematsu, S.; Oda, S.; Tanoue, S.; Kidoh, M.; Nakaura, T.; Funama, Y.; et al. Radiation Dose Reduction for 80-kVp Pediatric CT Using Deep Learning-Based Reconstruction: A Clinical and Phantom Study. AJR Am. J. Roentgenol. 2022, 219, 315–324. [Google Scholar] [CrossRef]
- Sun, J.; Li, H.; Li, H.; Li, M.; Gao, Y.; Zhou, Z.; Peng, Y. Application of deep learning image reconstruction algorithm to improve image quality in CT angiography of children with Takayasu arteritis. J. X-ray Sci. Technol. 2022, 30, 177–184. [Google Scholar] [CrossRef] [PubMed]
- Sun, J.; Li, H.; Li, J.; Yu, T.; Li, M.; Zhou, Z.; Peng, Y. Improving the image quality of pediatric chest CT angiography with low radiation dose and contrast volume using deep learning image reconstruction. Quant. Imaging Med. Surg. 2021, 11, 3051–3058. [Google Scholar] [CrossRef]
- Sun, J.; Li, H.; Li, J.; Cao, Y.; Zhou, Z.; Li, M.; Peng, Y. Performance evaluation of using shorter contrast injection and 70 kVp with deep learning image reconstruction for reduced contrast medium dose and radiation dose in coronary CT angiography for children: A pilot study. Quant. Imaging Med. Surg. 2021, 11, 4162–4171. [Google Scholar] [CrossRef]
- Sun, J.; Li, H.; Gao, J.; Li, J.; Li, M.; Zhou, Z.; Peng, Y. Performance evaluation of a deep learning image reconstruction (DLIR) algorithm in “double low” chest CTA in children: A feasibility study. Radiol. Med. 2021, 126, 1181–1188. [Google Scholar] [CrossRef]
- Sun, J.; Li, H.; Wang, B.; Li, J.; Li, M.; Zhou, Z.; Peng, Y. Application of a deep learning image reconstruction (DLIR) algorithm in head CT imaging for children to improve image quality and lesion detection. BMC Med. Imaging 2021, 21, 108. [Google Scholar] [CrossRef] [PubMed]
- Theruvath, A.J.; Siedek, F.; Yerneni, K.; Muehe, A.M.; Spunt, S.L.; Pribnow, A.; Moseley, M.; Lu, Y.; Zhao, Q.; Gulaka, P.; et al. Validation of Deep Learning-based Augmentation for Reduced 18F-FDG Dose for PET/MRI in Children and Young Adults with Lymphoma. Radiol. Artif. Intell. 2021, 3, e200232. [Google Scholar] [CrossRef]
- Yoon, H.; Kim, J.; Lim, H.J.; Lee, M.-J. Image quality assessment of pediatric chest and abdomen CT by deep learning reconstruction. BMC Med. Imaging 2021, 21, 146. [Google Scholar] [CrossRef]
- Zhang, K.; Shi, X.; Xie, S.-S.; Sun, J.-H.; Liu, Z.-H.; Zhang, S.; Song, J.-Y.; Shen, W. Deep learning image reconstruction in pediatric abdominal and chest computed tomography: A comparison of image quality and radiation dose. Quant. Imaging Med. Surg. 2022, 12, 3238–3250. [Google Scholar] [CrossRef] [PubMed]
- Ng, C.K.C. Artificial Intelligence for Radiation Dose Optimization in Pediatric Radiology: A Systematic Review. Children 2022, 9, 1044. [Google Scholar] [CrossRef] [PubMed]
- Helm, E.J.; Silva, C.T.; Roberts, H.C.; Manson, D.; Seed, M.T.M.; Amaral, J.G.; Babyn, P.S. Computer-aided detection for the identification of pulmonary nodules in pediatric oncology patients: Initial experience. Pediatr. Radiol. 2009, 39, 685–693. [Google Scholar] [CrossRef] [PubMed]
- Schalekamp, S.; Klein, W.M.; van Leeuwen, K.G. Current and emerging artificial intelligence applications in chest imaging: A pediatric perspective. Pediatr. Radiol. 2022, 52, 2120–2130. [Google Scholar] [CrossRef]
- Ng, C.K.C. Diagnostic Performance of Artificial Intelligence-Based Computer-Aided Detection and Diagnosis in Pediatric Radiology: A Systematic Review. Children 2023, 10, 525. [Google Scholar] [CrossRef]
- Nam, J.G.; Park, S.; Hwang, E.J.; Lee, J.H.; Jin, K.-N.; Lim, K.Y.; Vu, T.H.; Sohn, J.H.; Hwang, S.; Goo, J.M.; et al. Development and Validation of Deep Learning-based Automatic Detection Algorithm for Malignant Pulmonary Nodules on Chest Radiographs. Radiology 2019, 290, 218–228. [Google Scholar] [CrossRef] [Green Version]
- Hwang, E.J.; Park, S.; Jin, K.-N.; Kim, J.I.; Choi, S.Y.; Lee, J.H.; Goo, J.M.; Aum, J.; Yim, J.-J.; Park, C.M.; et al. Development and Validation of a Deep Learning-based Automatic Detection Algorithm for Active Pulmonary Tuberculosis on Chest Radiographs. Clin. Infect. Dis. 2019, 69, 739–747. [Google Scholar] [CrossRef] [Green Version]
- Hwang, E.J.; Park, S.; Jin, K.-N.; Kim, J.I.; Choi, S.Y.; Lee, J.H.; Goo, J.M.; Aum, J.; Yim, J.-J.; Cohen, J.G.; et al. Development and Validation of a Deep Learning-Based Automated Detection Algorithm for Major Thoracic Diseases on Chest Radiographs. JAMA Netw. Open 2019, 2, e191095. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Liang, C.-H.; Liu, Y.-C.; Wu, M.-T.; Garcia-Castro, F.; Alberich-Bayarri, A.; Wu, F.-Z. Identifying pulmonary nodules or masses on chest radiography using deep learning: External validation and strategies to improve clinical practice. Clin. Radiol. 2020, 75, 38–45. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Singh, R.; Kalra, M.K.; Nitiwarangkul, C.; Patti, J.A.; Homayounieh, F.; Padole, A.; Rao, P.; Putha, P.; Muse, V.V.; Sharma, A.; et al. Deep learning in chest radiography: Detection of findings and presence of change. PLoS ONE 2018, 13, e0204155. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Mushtaq, J.; Pennella, R.; Lavalle, S.; Colarieti, A.; Steidler, S.; Martinenghi, C.M.A.; Palumbo, D.; Esposito, A.; Rovere-Querini, P.; Tresoldi, M.; et al. Initial chest radiographs and artificial intelligence (AI) predict clinical outcomes in COVID-19 patients: Analysis of 697 Italian patients. Eur. Radiol. 2021, 31, 1770–1779. [Google Scholar] [CrossRef]
- Qin, Z.Z.; Sander, M.S.; Rai, B.; Titahong, C.N.; Sudrungrot, S.; Laah, S.N.; Adhikari, L.M.; Carter, E.J.; Puri, L.; Codlin, A.J.; et al. Using artificial intelligence to read chest radiographs for tuberculosis detection: A multi-site evaluation of the diagnostic accuracy of three deep learning systems. Sci. Rep. 2019, 9, 15000. [Google Scholar] [CrossRef] [Green Version]
- Dellios, N.; Teichgraeber, U.; Chelaru, R.; Malich, A.; Papageorgiou, I.E. Computer-aided Detection Fidelity of Pulmonary Nodules in Chest Radiograph. J. Clin. Imaging Sci. 2017, 7, 8. [Google Scholar] [CrossRef]
- Kligerman, S.; Cai, L.P.; White, C.S. The Effect of Computer-aided Detection on Radiologist Performance in the Detection of Lung Cancers Previously Missed on a Chest Radiograph. J. Thorac. Imaging 2013, 28, 244–252. [Google Scholar] [CrossRef]
- Schalekamp, S.; van Ginneken, B.; Koedam, E.; Snoeren, M.M.; Tiehuis, A.M.; Wittenberg, R.; Karssemeijer, N.; Schaefer-Prokop, C.M. Computer-aided Detection Improves Detection of Pulmonary Nodules in Chest Radiographs beyond the Support by Bone-suppressed Images. Radiology 2014, 272, 252–261. [Google Scholar] [CrossRef] [Green Version]
- Sim, Y.; Chung, M.J.; Kotter, E.; Yune, S.; Kim, M.; Do, S.; Han, K.; Kim, H.; Yang, S.; Lee, D.-J.; et al. Deep Convolutional Neural Network-based Software Improves Radiologist Detection of Malignant Lung Nodules on Chest Radiographs. Radiology 2020, 294, 199–209. [Google Scholar] [CrossRef]
- Murphy, K.; Habib, S.S.; Zaidi, S.M.A.; Khowaja, S.; Khan, A.; Melendez, J.; Scholten, E.T.; Amad, F.; Schalekamp, S.; Verhagen, M.; et al. Computer aided detection of tuberculosis on chest radiographs: An evaluation of the CAD4TB v6 system. Sci. Rep. 2020, 10, 5492. [Google Scholar] [CrossRef] [Green Version]
- Murphy, K.; Smits, H.; Knoops, A.J.G.; Korst, M.B.J.M.; Samson, T.; Scholten, E.T.; Schalekamp, S.; Schaefer-Prokop, C.M.; Philipsen, R.H.H.M.; Meijers, A.; et al. COVID-19 on Chest Radiographs: A Multireader Evaluation of an Artificial Intelligence System. Radiology 2020, 296, E166–E172. [Google Scholar] [CrossRef] [PubMed]
- Park, S.; Lee, S.M.; Lee, K.H.; Jung, K.-H.; Bae, W.; Choe, J.; Seo, J.B. Deep learning-based detection system for multiclass lesions on chest radiographs: Comparison with observer readings. Eur. Radiol. 2020, 30, 1359–1368. [Google Scholar] [CrossRef] [PubMed]
- Jacobs, C.; van Rikxoort, E.M.; Murphy, K.; Prokop, M.; Schaefer-Prokop, C.M.; van Ginneken, B. Computer-aided detection of pulmonary nodules: A comparative study using the public LIDC/IDRI database. Eur. Radiol. 2016, 26, 2139–2147. [Google Scholar] [CrossRef] [PubMed]
- Setio, A.A.A.; Traverso, A.; de Bel, T.; Berens, M.S.; Bogaard, C.v.D.; Cerello, P.; Chen, H.; Dou, Q.; Fantacci, M.E.; Geurts, B.; et al. Validation, comparison, and combination of algorithms for automatic detection of pulmonary nodules in computed tomography images: The LUNA16 challenge. Med. Image Anal. 2017, 42, 1–13. [Google Scholar] [CrossRef] [Green Version]
- Lo, S.B.; Freedman, M.T.; Gillis, L.B.; White, C.S.; Mun, S.K. Journal Club: Computer-Aided Detection of Lung Nodules on CT with a Computerized Pulmonary Vessel Suppressed Function. AJR Am. J. Roentgenol. 2018, 210, 480–488. [Google Scholar] [CrossRef]
- Wagner, A.-K.; Hapich, A.; Psychogios, M.N.; Teichgräber, U.; Malich, A.; Papageorgiou, I. Computer-Aided Detection of Pulmonary Nodules in Computed Tomography Using ClearReadCT. J. Med. Syst. 2019, 43, 58. [Google Scholar] [CrossRef]
- Scholten, E.T.; Jacobs, C.; van Ginneken, B.; van Riel, S.; Vliegenthart, R.; Oudkerk, M.; de Koning, H.J.; Horeweg, N.; Prokop, M.; Gietema, H.A.; et al. Detection and quantification of the solid component in pulmonary subsolid nodules by semiautomatic segmentation. Eur. Radiol. 2015, 25, 488–496. [Google Scholar] [CrossRef] [Green Version]
- Fischer, A.M.; Varga-Szemes, A.; Martin, S.S.; Sperl, J.I.; Sahbaee, P.; Neumann, D.M.; Gawlitza, J.; Henzler, T.; Johnson, C.M.B.; Nance, J.W.; et al. Artificial Intelligence-based Fully Automated Per Lobe Segmentation and Emphysema-quantification Based on Chest Computed Tomography Compared with Global Initiative for Chronic Obstructive Lung Disease Severity of Smokers. J. Thorac. Imaging 2020, 35, S28–S34. [Google Scholar] [CrossRef]
- Ng, C.K.C.; Leung, V.W.S.; Hung, R.H.M. Clinical Evaluation of Deep Learning and Atlas-Based Auto-Contouring for Head and Neck Radiation Therapy. Appl. Sci. 2022, 12, 11681. [Google Scholar] [CrossRef]
- Wang, J.; Chen, Z.; Yang, C.; Qu, B.; Ma, L.; Fan, W.; Zhou, Q.; Zheng, Q.; Xu, S. Evaluation Exploration of Atlas-Based and Deep Learning-Based Automatic Contouring for Nasopharyngeal Carcinoma. Front. Oncol. 2022, 12, 833816. [Google Scholar] [CrossRef]
- Brunenberg, E.J.; Steinseifer, I.K.; Bosch, S.v.D.; Kaanders, J.H.; Brouwer, C.L.; Gooding, M.J.; van Elmpt, W.; Monshouwer, R. External validation of deep learning-based contouring of head and neck organs at risk. Phys. Imaging Radiat. Oncol. 2020, 15, 8–15. [Google Scholar] [CrossRef] [PubMed]
- Chen, W.; Li, Y.; Dyer, B.A.; Feng, X.; Rao, S.; Benedict, S.H.; Chen, Q.; Rong, Y. Deep learning vs. atlas-based models for fast auto-segmentation of the masticatory muscles on head and neck CT images. Radiat. Oncol. 2020, 15, 176. [Google Scholar] [CrossRef] [PubMed]
- Arora, A.; Arora, A. Generative adversarial networks and synthetic patient data: Current challenges and future perspectives. Futur. Health J. 2022, 9, 190–193. [Google Scholar] [CrossRef] [PubMed]
- Ali, H.; Biswas, R.; Mohsen, F.; Shah, U.; Alamgir, A.; Mousa, O.; Shah, Z. The role of generative adversarial networks in brain MRI: A scoping review. Insights Into Imaging 2022, 13, 98. [Google Scholar] [CrossRef] [PubMed]
- Laino, M.E.; Cancian, P.; Politi, L.S.; Della Porta, M.G.; Saba, L.; Savevski, V. Generative Adversarial Networks in Brain Imaging: A Narrative Review. J. Imaging 2022, 8, 83. [Google Scholar] [CrossRef] [PubMed]
- Ali, H.; Shah, Z. Combating COVID-19 Using Generative Adversarial Networks and Artificial Intelligence for Medical Images: Scoping Review. JMIR Med. Inform. 2022, 10, e37365. [Google Scholar] [CrossRef]
- Vey, B.L.; Gichoya, J.W.; Prater, A.; Hawkins, C.M. The Role of Generative Adversarial Networks in Radiation Reduction and Artifact Correction in Medical Imaging. J. Am. Coll. Radiol. 2019, 16, 1273–1278. [Google Scholar] [CrossRef]
- Koshino, K.; Werner, R.A.; Pomper, M.G.; Bundschuh, R.A.; Toriumi, F.; Higuchi, T.; Rowe, S.P. Narrative review of generative adversarial networks in medical and molecular imaging. Ann. Transl. Med. 2021, 9, 821. [Google Scholar] [CrossRef]
- Yi, X.; Walia, E.; Babyn, P. Generative adversarial network in medical imaging: A review. Med. Image Anal. 2019, 58, 101552. [Google Scholar] [CrossRef] [Green Version]
- Apostolopoulos, I.D.; Papathanasiou, N.D.; Apostolopoulos, D.J.; Panayiotakis, G.S. Applications of Generative Adversarial Networks (GANs) in Positron Emission Tomography (PET) imaging: A review. Eur. J. Nucl. Med. Mol. Imaging 2022, 49, 3717–3739. [Google Scholar] [CrossRef]
- Jeong, J.J.; Tariq, A.; Adejumo, T.; Trivedi, H.; Gichoya, J.W.; Banerjee, I. Systematic Review of Generative Adversarial Networks (GANs) for Medical Image Classification and Segmentation. J. Digit. Imaging 2022, 35, 137–152. [Google Scholar] [CrossRef]
- Goodfellow, I.J.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative adversarial nets. Adv. Neural Inf. Process. Syst. 2014, 27, 2672–2680. [Google Scholar] [CrossRef]
- Sun, Z.; Ng, C.K.C. Artificial Intelligence (Enhanced Super-Resolution Generative Adversarial Network) for Calcium Deblooming in Coronary Computed Tomography Angiography: A Feasibility Study. Diagnostics 2022, 12, 991. [Google Scholar] [CrossRef]
- Sun, Z.; Ng, C.K.C. Finetuned Super-Resolution Generative Adversarial Network (Artificial Intelligence) Model for Calcium Deblooming in Coronary Computed Tomography Angiography. J. Pers. Med. 2022, 12, 1354. [Google Scholar] [CrossRef] [PubMed]
- Al Mahrooqi, K.M.S.; Ng, C.K.C.; Sun, Z. Pediatric Computed Tomography Dose Optimization Strategies: A Literature Review. J. Med. Imaging Radiat. Sci. 2015, 46, 241–249. [Google Scholar] [CrossRef] [Green Version]
- Davendralingam, N.; Sebire, N.J.; Arthurs, O.J.; Shelmerdine, S.C. Artificial intelligence in paediatric radiology: Future opportunities. Br. J. Radiol. 2021, 94, 20200975. [Google Scholar] [CrossRef] [PubMed]
- Tuysuzoglu, A.; Tan, J.; Eissa, K.; Kiraly, A.P.; Diallo, M.; Kamen, A. Deep Adversarial Context-Aware Landmark Detection for Ultrasound Imaging. In Proceedings of the 21st International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI 2018), Granada, Spain, 16–20 September 2018; pp. 151–158. [Google Scholar] [CrossRef] [Green Version]
- PRISMA: Transparent Reporting of Systematic Reviews and Meta-Analyses. Available online: http://www.prisma-statement.org/ (accessed on 26 June 2023).
- Waffenschmidt, S.; Knelangen, M.; Sieben, W.; Bühn, S.; Pieper, D. Single screening versus conventional double screening for study selection in systematic reviews: A methodological systematic review. BMC Med. Res. Methodol. 2019, 19, 132. [Google Scholar] [CrossRef]
- Ng, C.K.C. A review of the impact of the COVID-19 pandemic on pre-registration medical radiation science education. Radiography 2021, 28, 222–231. [Google Scholar] [CrossRef]
- Xu, L.; Gao, J.; Wang, Q.; Yin, J.; Yu, P.; Bai, B.; Pei, R.; Chen, D.; Yang, G.; Wang, S.; et al. Computer-Aided Diagnosis Systems in Diagnosing Malignant Thyroid Nodules on Ultrasonography: A Systematic Review and Meta-Analysis. Eur. Thyroid. J. 2020, 9, 186–193. [Google Scholar] [CrossRef]
- Aggarwal, R.; Sounderajah, V.; Martin, G.; Ting, D.S.W.; Karthikesalingam, A.; King, D.; Ashrafian, H.; Darzi, A. Diagnostic accuracy of deep learning in medical imaging: A systematic review and meta-analysis. NPJ Digit. Med. 2021, 4, 65. [Google Scholar] [CrossRef]
- Vasey, B.; Ursprung, S.; Beddoe, B.; Taylor, E.H.; Marlow, N.; Bilbro, N.; Watkinson, P.; McCulloch, P. Association of Clinician Diagnostic Performance with Machine Learning-Based Decision Support Systems: A Systematic Review. JAMA Netw. Open 2021, 4, e211276. [Google Scholar] [CrossRef]
- Imrey, P.B. Limitations of Meta-analyses of Studies with High Heterogeneity. JAMA Netw. Open 2020, 3, e1919325. [Google Scholar] [CrossRef] [Green Version]
- Sirriyeh, R.; Lawton, R.; Gardner, P.; Armitage, G. Reviewing studies with diverse designs: The development and evaluation of a new tool. J. Eval. Clin. Pract. 2012, 18, 746–752. [Google Scholar] [CrossRef] [PubMed]
- Devika, K.; Mahapatra, D.; Subramanian, R.; Oruganti, V.R.M. Outlier-Based Autism Detection Using Longitudinal Structural MRI. IEEE Access 2022, 10, 27794–27808. [Google Scholar] [CrossRef]
- Kuttala, D.; Mahapatra, D.; Subramanian, R.; Oruganti, V.R.M. Dense attentive GAN-based one-class model for detection of autism and ADHD. J. King Saud Univ.-Comput. Inf. Sci. 2022, 34, 10444–10458. [Google Scholar] [CrossRef]
- Motamed, S.; Khalvati, F. Inception-GAN for Semi-supervised Detection of Pneumonia in Chest X-rays. In Proceedings of the 43rd Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC 2021), Jalisco, Mexico, 1–5 November 2021; pp. 3774–3778. [Google Scholar] [CrossRef]
- Dittimi, T.V.; Suen, C.Y. Single Image Super-Resolution for Medical Image Applications. In Proceedings of the International Conference on Pattern Recognition and Artificial Intelligence 2020 (ICPRAI 2020), Zhongshan, China, 19–23 October 2020; pp. 660–666. [Google Scholar] [CrossRef]
- Fu, Y.; Dong, S.; Liao, Y.; Xue, L.; Xu, Y.; Li, F.; Yang, Q.; Yu, T.; Tian, M.; Zhuo, C. A Resource-Efficient Deep Learning Framework for Low-Dose Brain Pet Image Reconstruction and Analysis. In Proceedings of the 19th IEEE International Symposium on Biomedical Imaging (ISBI 2022), Kolkata, India, 28–31 March 2022; pp. 1–5. [Google Scholar] [CrossRef]
- Park, H.S.; Jeon, K.; Lee, J.; You, S.K. Denoising of pediatric low dose abdominal CT using deep learning based algorithm. PLoS ONE 2022, 17, e0260369. [Google Scholar] [CrossRef]
- Pham, C.-H.; Tor-Diez, C.; Meunier, H.; Bednarek, N.; Fablet, R.; Passat, N.; Rousseau, F. Simultaneous Super-Resolution and Segmentation Using a Generative Adversarial Network: Application to Neonatal Brain MRI. In Proceedings of the 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019), Venice, Italy, 8–11 April 2019; pp. 991–994. [Google Scholar] [CrossRef]
- Decourt, C.; Duong, L. Semi-supervised generative adversarial networks for the segmentation of the left ventricle in pediatric MRI. Comput. Biol. Med. 2020, 123, 103884. [Google Scholar] [CrossRef]
- Guo, L.; Hu, Y.; Lei, B.; Du, J.; Mao, M.; Jin, Z.; Xia, B.; Wang, T. Dual Network Generative Adversarial Networks for Pediatric Echocardiography Segmentation. In Proceedings of the 22nd International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI 2019), Shenzhen, China, 13–17 October 2019; pp. 113–122. [Google Scholar] [CrossRef]
- Kan, C.N.E.; Gilat-Schmidt, T.; Ye, D.H. Enhancing reproductive organ segmentation in pediatric CT via adversarial learning. In Proceedings of the International Society for Optics and Photonics Medical Imaging 2021 (SPIE Medical Imaging 2021), San Diego, CA, USA, 15–20 February 2021; pp. 1–6. [Google Scholar] [CrossRef]
- Karimi-Bidhendi, S.; Arafati, A.; Cheng, A.L.; Wu, Y.; Kheradvar, A.; Jafarkhani, H. Fully-automated deep-learning segmentation of pediatric cardiovascular magnetic resonance of patients with complex congenital heart diseases. J. Cardiovasc. Magn. Reson. 2020, 22, 80. [Google Scholar] [CrossRef]
- Zhou, Y.; Rakkunedeth, A.; Keen, C.; Knight, J.; Jaremko, J.L. Wrist Ultrasound Segmentation by Deep Learning. In Proceedings of the 20th International Conference on Artificial Intelligence in Medicine (AIME 2022), Halifax, NS, Canada, 14–17 June 2022; pp. 230–237. [Google Scholar] [CrossRef]
- Banerjee, T.; Batta, D.; Jain, A.; Karthikeyan, S.; Mehndiratta, H.; Kishan, K.H. Deep Belief Convolutional Neural Network with Artificial Image Creation by GANs Based Diagnosis of Pneumonia in Radiological Samples of the Pectoralis Major. In Proceedings of the 8th International Conference on Electrical and Electronics Engineering (ICEEE 2021), New Delhi, India, 9–11 April 2021; pp. 979–1002. [Google Scholar] [CrossRef]
- Diller, G.-P.; Vahle, J.; Radke, R.; Vidal, M.L.B.; Fischer, A.J.; Bauer, U.M.M.; Sarikouch, S.; Berger, F.; Beerbaum, P.; Baumgartner, H.; et al. Utility of deep learning networks for the generation of artificial cardiac magnetic resonance images in congenital heart disease. BMC Med. Imaging 2020, 20, 113. [Google Scholar] [CrossRef]
- Guo, Z.; Zheng, L.; Ye, L.; Pan, S.; Yan, T. Data Augmentation Using Auxiliary Classifier Generative Adversarial Networks. In Proceedings of the 17th Chinese Intelligent Systems Conference, Fuzhou, China, 16–17 October 2021; pp. 790–800. [Google Scholar] [CrossRef]
- Guo, Z.-Z.; Zheng, L.-X.; Huang, D.-T.; Yan, T.; Su, Q.-L. RS-FFGAN: Generative adversarial network based on real sample feature fusion for pediatric CXR image data enhancement. J. Radiat. Res. Appl. Sci. 2022, 15, 100461. [Google Scholar] [CrossRef]
- Kan, C.N.E.; Maheenaboobacker, N.; Ye, D.H. Age-Conditioned Synthesis of Pediatric Computed Tomography with Auxiliary Classifier Generative Adversarial Networks. In Proceedings of the 17th IEEE International Symposium on Biomedical Imaging (ISBI 2020), Iowa City, IA, USA, 3–7 April 2020; pp. 109–112. [Google Scholar] [CrossRef]
- Khalifa, N.E.M.; Taha, M.H.N.; Hassanien, A.E.; Elghamrawy, S. Detection of Coronavirus (COVID-19) Associated Pneumonia Based on Generative Adversarial Networks and a Fine-Tuned Deep Transfer Learning Model Using Chest X-ray Dataset. In Proceedings of the 8th International Conference on Advanced Intelligent Systems and Informatics 2022 (AISI 2022), Cairo, Egypt, 20–22 November 2022; pp. 234–247. [Google Scholar] [CrossRef]
- Venu, S.K. Improving the Generalization of Deep Learning Classification Models in Medical Imaging Using Transfer Learning and Generative Adversarial Networks. In Proceedings of the 13th International Conference on Agents and Artificial Intelligence (ICAART 2021), Setúbal, Portugal, 4–6 February 2021; pp. 218–235. [Google Scholar] [CrossRef]
- Li, X.; Ke, Y. Privacy Preserving and Communication Efficient Information Enhancement for Imbalanced Medical Image Classification. In Proceedings of the Medical Image Understanding and Analysis—26th Annual Conference (MIUA 2022), Cambridge, UK, 27–29 July 2022; pp. 663–679. [Google Scholar] [CrossRef]
- Prince, E.W.; Whelan, R.; Mirsky, D.M.; Stence, N.; Staulcup, S.; Klimo, P.; Anderson, R.C.E.; Niazi, T.N.; Grant, G.; Souweidane, M.; et al. Robust deep learning classification of adamantinomatous craniopharyngioma from limited preoperative radiographic images. Sci. Rep. 2020, 10, 16885. [Google Scholar] [CrossRef]
- Su, L.; Fu, X.; Hu, Q. Generative adversarial network based data augmentation and gender-last training strategy with application to bone age assessment. Comput. Methods Programs Biomed. 2021, 212, 106456. [Google Scholar] [CrossRef] [PubMed]
- Szepesi, P.; Szilágyi, L. Detection of pneumonia using convolutional neural networks and deep learning. Biocybern. Biomed. Eng. 2022, 42, 1012–1022. [Google Scholar] [CrossRef]
- Vetrimani, E.; Arulselvi, M.; Ramesh, G. Building convolutional neural network parameters using genetic algorithm for the croup cough classification problem. Meas. Sens. 2023, 27, 100717. [Google Scholar] [CrossRef]
- Chen, J.; Sun, Y.; Fang, Z.; Lin, W.; Li, G.; Wang, L.; UNC UMN Baby Connectome Project Consortium. Harmonized neonatal brain MR image segmentation model for cross-site datasets. Biomed. Signal Process. Control 2021, 69, 102810. [Google Scholar] [CrossRef] [PubMed]
- Hržić, F.; Žužić, I.; Tschauner, S.; Štajduhar, I. Cast suppression in radiographs by generative adversarial networks. J. Am. Med. Inform. Assoc. 2021, 28, 2687–2694. [Google Scholar] [CrossRef]
- Kaplan, S.; Perrone, A.; Alexopoulos, D.; Kenley, J.K.; Barch, D.M.; Buss, C.; Elison, J.T.; Graham, A.M.; Neil, J.J.; O’Connor, T.G.; et al. Synthesizing pseudo-T2w images to recapture missing data in neonatal neuroimaging with applications in rs-fMRI. Neuroimage 2022, 253, 119091. [Google Scholar] [CrossRef]
- Khalili, N.; Turk, E.; Zreik, M.; Viergever, M.A.; Benders, M.J.N.L.; Išgum, I. Generative Adversarial Network for Segmentation of Motion Affected Neonatal Brain MRI. In Proceedings of the 22nd International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI 2019), Shenzhen, China, 13–17 October 2019; pp. 320–328. [Google Scholar] [CrossRef] [Green Version]
- Maspero, M.; Bentvelzen, L.G.; Savenije, M.H.; Guerreiro, F.; Seravalli, E.; Janssens, G.O.; Berg, C.A.v.D.; Philippens, M.E. Deep learning-based synthetic CT generation for paediatric brain MR-only photon and proton radiotherapy. Radiother. Oncol. 2020, 153, 197–204. [Google Scholar] [CrossRef]
- Peng, L.; Lin, L.; Lin, Y.; Zhang, Y.; Vlasova, R.M.; Prieto, J.; Chen, Y.-W.; Gerig, G.; Styner, M. Multi-modal Perceptual Adversarial Learning for Longitudinal Prediction of Infant MR Images. In Proceedings of the First International Workshop on Advances in Simplifying Medical UltraSound (ASMUS 2020) and the 5th International Workshop on Perinatal, Preterm and Paediatric Image Analysis (PIPPI 2020), Lima, Peru, 4–8 October 2020; pp. 284–294. [Google Scholar] [CrossRef]
- Tang, Y.; Tang, Y.; Sandfort, V.; Xiao, J.; Summers, R.M. TUNA-Net: Task-Oriented Unsupervised Adversarial Network for Disease Recognition in Cross-domain Chest X-rays. In Proceedings of the 22nd International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI 2019), Shenzhen, China, 13–17 October 2019; pp. 431–440. [Google Scholar] [CrossRef] [Green Version]
- Tor-Diez, C.; Porras, A.R.; Packer, R.J.; Avery, R.A.; Linguraru, M.G. Unsupervised MRI Homogenization: Application to Pediatric Anterior Visual Pathway Segmentation. In Proceedings of the 11th International Workshop on Machine Learning in Medical Imaging (MLMI 2020), Lima, Peru, 4 October 2020; pp. 180–188. [Google Scholar] [CrossRef]
- Wang, C.; Uh, J.; Merchant, D.T.E.; Hua, C.-H.; Acharya, S. Facilitating MR-Guided Adaptive Proton Therapy in Children Using Deep Learning-Based Synthetic CT. Int. J. Part. Ther. 2021, 8, 11–20. [Google Scholar] [CrossRef]
- Wang, C.; Uh, J.; He, X.; Hua, C.-H.; Sahaja, A. Transfer learning-based synthetic CT generation for MR-only proton therapy planning in children with pelvic sarcomas. In Proceedings of the Medical Imaging 2021: Physics of Medical Imaging, San Diego, CA, USA, 15–20 February 2021. [Google Scholar] [CrossRef]
- Wang, C.; Uh, J.; Patni, T.; Do, T.M.; Li, Y.; Hua, C.; Acharya, S. Toward MR-only proton therapy planning for pediatric brain tumors: Synthesis of relative proton stopping power images with multiple sequence MRI and development of an online quality assurance tool. Med. Phys. 2022, 49, 1559–1570. [Google Scholar] [CrossRef]
- Zhao, F.; Wu, Z.; Wang, L.; Lin, W.; Xia, S.; Shen, D.; Li, G.; UNC/UMN Baby Connectome Project Consortium. Harmonization of infant cortical thickness using surface-to-surface cycle-consistent adversarial networks. In Proceedings of the 22nd International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI 2019), Shenzhen, China, 13–17 October 2019; pp. 475–483. [Google Scholar] [CrossRef]
- Mostapha, M.; Prieto, J.; Murphy, V.; Girault, J.; Foster, M.; Rumple, A.; Blocher, J.; Lin, W.; Elison, J.; Gilmore, J.; et al. Semi-supervised VAE-GAN for Out-of-Sample Detection Applied to MRI Quality Control. In Proceedings of the 22nd International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI 2019), Shenzhen, China, 13–17 October 2019; pp. 127–136. [Google Scholar] [CrossRef]
- Pediatric X-ray Imaging. Available online: https://www.fda.gov/radiation-emitting-products/medical-imaging/pediatric-x-ray-imaging (accessed on 20 July 2023).
- Kozak, B.M.; Jaimes, C.; Kirsch, J.; Gee, M.S. MRI Techniques to Decrease Imaging Times in Children. RadioGraphics 2020, 40, 485–502. [Google Scholar] [CrossRef]
- Li, X.; Cokkinos, D.; Gadani, S.; Rafailidis, V.; Aschwanden, M.; Levitin, A.; Szaflarski, D.; Kirksey, L.; Staub, D.; Partovi, S. Advanced ultrasound techniques in arterial diseases. Int. J. Cardiovasc. Imaging 2022, 38, 1711–1721. [Google Scholar] [CrossRef]
- Chaudhari, A.S.; Mittra, E.; Davidzon, G.A.; Gulaka, P.; Gandhi, H.; Brown, A.; Zhang, T.; Srinivas, S.; Gong, E.; Zaharchuk, G.; et al. Low-count whole-body PET with deep learning in a multicenter and externally validated study. NPJ Digit. Med. 2021, 4, 127. [Google Scholar] [CrossRef]
- Liang, G.; Zheng, L. A transfer learning method with deep residual network for pediatric pneumonia diagnosis. Comput. Methods Programs Biomed. 2020, 187, 104964. [Google Scholar] [CrossRef]
- Behzadi-Khormouji, H.; Rostami, H.; Salehi, S.; Derakhshande-Rishehri, T.; Masoumi, M.; Salemi, S.; Keshavarz, A.; Gholamrezanezhad, A.; Assadi, M.; Batouli, A. Deep learning, reusable and problem-based architectures for detection of consolidation on chest X-ray images. Comput. Methods Programs Biomed. 2020, 185, 105162. [Google Scholar] [CrossRef]
- Liu, X.; Faes, L.; Kale, A.U.; Wagner, S.K.; Fu, D.J.; Bruynseels, A.; Mahendiran, T.; Moraes, G.; Shamdas, M.; Kern, C.; et al. A comparison of deep learning performance against health-care professionals in detecting diseases from medical imaging: A systematic review and meta-analysis. Lancet Digit. Health 2019, 1, e271–e297. [Google Scholar] [CrossRef] [PubMed]
- Sun, Z.; Ng, C.K.C.; Dos Reis, C.S. Synchrotron radiation computed tomography versus conventional computed tomography for assessment of four types of stent grafts used for endovascular treatment of thoracic and abdominal aortic aneurysms. Quant. Imaging Med. Surg. 2018, 8, 609–620. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Almutairi, A.M.; Sun, Z.; Ng, C.; Al-Safran, Z.A.; Al-Mulla, A.A.; Al-Jamaan, A.I. Optimal scanning protocols of 64-slice CT angiography in coronary artery stents: An in vitro phantom study. Eur. J. Radiol. 2010, 74, 156–160. [Google Scholar] [CrossRef] [Green Version]
- Sun, Z.; Ng, C.K.C. Use of Synchrotron Radiation to Accurately Assess Cross-Sectional Area Reduction of the Aortic Branch Ostia Caused by Suprarenal Stent Wires. J. Endovasc. Ther. 2017, 24, 870–879. [Google Scholar] [CrossRef] [PubMed]
- Harris, M.; Qi, A.; Jeagal, L.; Torabi, N.; Menzies, D.; Korobitsyn, A.; Pai, M.; Nathavitharana, R.R.; Khan, F.A. A systematic review of the diagnostic accuracy of artificial intelligence-based computer programs to analyze chest X-rays for pulmonary tuberculosis. PLoS ONE 2019, 14, e0221339. [Google Scholar] [CrossRef]
- Groen, A.M.; Kraan, R.; Amirkhan, S.F.; Daams, J.G.; Maas, M. A systematic review on the use of explainability in deep learning systems for computer aided diagnosis in radiology: Limited use of explainable AI? Eur. J. Radiol. 2022, 157, 110592. [Google Scholar] [CrossRef] [PubMed]
- Zebari, D.A.; Ibrahim, D.A.; Zeebaree, D.Q.; Haron, H.; Salih, M.S.; Damaševičius, R.; Mohammed, M.A. Systematic Review of Computing Approaches for Breast Cancer Detection Based Computer Aided Diagnosis Using Mammogram Images. Appl. Artif. Intell. 2021, 35, 2157–2203. [Google Scholar] [CrossRef]
Patient/Population | Pediatric patients aged from 0 to 21 years |
Intervention | Use of GAN to accomplish tasks involved in pediatric radiology |
Comparison | GAN versus other approaches to accomplish the same task in pediatric radiology |
Outcome | Performance of task accomplishment |
Inclusion Criteria | Exclusion Criteria |
---|---|
GAN Application | Best Model Performance |
---|---|
Disease diagnosis | 0.978 accuracy and 0.900 AUC |
Image quality assessment | 0.81 sensitivity, 0.95 specificity, and 0.93 accuracy |
Image reconstruction | 0.991 SSIM, 38.36 dB PSNR, 31.9 SNR and 21.2 CNR |
Image segmentation | 0.929 DSC, 0.338 mm HD, 0.86 Jaccard index, 0.92 sensitivity, 0.998 specificity and NPV, and 0.94 PPV |
Image synthesis and data augmentation for DL-CAD performance enhancement | 0.994 AUC, 0.993 sensitivity, 0.990 PPV, 0.991 F1 score, 0.97 specificity, and 0.990 accuracy |
Image translation | 0.93 SSIM, 0.98 DSC, 32.6 dB PSNR, 42.4 HU MAE, 13.03 mm HD and 0.23 mm MSD |
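All of the applications summarized above rest on the same adversarial training principle: a generator network learns to produce images that a discriminator network cannot distinguish from real ones, while the discriminator is simultaneously trained to tell real from generated samples. The sketch below is a minimal, self-contained PyTorch illustration of this alternating update on synthetic one-dimensional data; the layer sizes, learning rates, and data generator are arbitrary assumptions for demonstration and do not reproduce any architecture evaluated in the reviewed studies.

```python
import torch
from torch import nn

latent_dim, data_dim = 16, 64  # arbitrary sizes for illustration

# Generator: maps latent noise to a synthetic "image" vector in [-1, 1].
generator = nn.Sequential(
    nn.Linear(latent_dim, 128), nn.ReLU(),
    nn.Linear(128, data_dim), nn.Tanh(),
)

# Discriminator: outputs a real/fake logit for each sample.
discriminator = nn.Sequential(
    nn.Linear(data_dim, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1),
)

criterion = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def real_batch(batch_size: int) -> torch.Tensor:
    # Stand-in for real images: smooth random signals scaled to [-1, 1].
    return torch.tanh(torch.cumsum(torch.randn(batch_size, data_dim) * 0.1, dim=1))

for step in range(200):  # a few steps only; real training needs far more
    real = real_batch(32)
    fake = generator(torch.randn(32, latent_dim))

    # 1) Update the discriminator: push real samples toward 1, fakes toward 0.
    opt_d.zero_grad()
    loss_d = criterion(discriminator(real), torch.ones(32, 1)) + \
             criterion(discriminator(fake.detach()), torch.zeros(32, 1))
    loss_d.backward()
    opt_d.step()

    # 2) Update the generator: try to make the discriminator label fakes as real.
    opt_g.zero_grad()
    loss_g = criterion(discriminator(fake), torch.ones(32, 1))
    loss_g.backward()
    opt_g.step()

    if step % 50 == 0:
        print(f"step {step}: loss_D={loss_d.item():.3f}, loss_G={loss_g.item():.3f}")
```

The architecture-specific variants encountered in this review modify the networks and loss terms within this loop rather than the loop itself: DCGAN replaces the fully connected layers with convolutional ones, CycleGAN couples two generators with a cycle-consistency loss for unpaired image translation, and WGAN substitutes a Wasserstein critic loss for the binary cross-entropy objective.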