Open Access Article

Visual and Quantitative Evaluation of Amyloid Brain PET Image Synthesis with Generative Adversarial Network

1 Institute of Convergence Bio-Health, Dong-A University, Busan 602760, Korea
2 Department of Electric Electronic and Communication Engineering, Kyungsung University, Busan 48434, Korea
3 College of General Education, Dong-A University, Busan 602760, Korea
4 Department of Nuclear Medicine, Dong-A University College of Medicine, Busan 602760, Korea
* Author to whom correspondence should be addressed.
Appl. Sci. 2020, 10(7), 2628; https://doi.org/10.3390/app10072628
Received: 6 March 2020 / Revised: 31 March 2020 / Accepted: 4 April 2020 / Published: 10 April 2020
(This article belongs to the Special Issue Machine Learning in Medical Image Processing)
Conventional data augmentation (DA) techniques, used to improve the performance of predictive models trained on imbalanced data sets, require effort to define the proper repeating operations (e.g., rotation and mirroring) according to the target class distribution. Although DA based on a generative adversarial network (GAN) has the potential to overcome these disadvantages, this technique has rarely been applied to medical images, and in particular, quantitative evaluation has rarely been used to determine whether the generated images have enough realism and diversity for DA. In this study, we synthesized 18F-Florbetaben (FBB) images using a conditional GAN. The generated images were evaluated with various measures, and we present the state of the images and the quantitative similarity values at which successful DA from generated images can be expected. The method includes (1) a conditional WGAN-GP to learn the distribution of axial images extracted from pre-processed 3D FBB images, (2) a pre-trained DenseNet121 and model-agnostic metrics for visual and quantitative measurement of the generated image distribution, and (3) a machine learning model for observing the improvement in generalization performance achieved by the generated dataset. The Visual Turing test showed similarity in the descriptions of typical patterns of amyloid deposition for each generated image. However, differences in similarity and classification performance per axial level were observed that did not agree with the visual evaluation. Experimental results demonstrated that the quantitative measurements detected the similarity between the two distributions and revealed mode collapse better than the Visual Turing test and t-SNE.
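The WGAN-GP objective named in step (1) trains a critic with a Wasserstein term plus a gradient penalty on interpolates between real and generated samples. As a hedged illustration only (the paper trains a conditional WGAN-GP on image slices; here a toy linear critic D(x) = w·x over small feature vectors is assumed so the input gradient is analytic, and the label conditioning is omitted for brevity — all names and shapes are illustrative, not the authors' implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def critic(x, w):
    # Toy linear critic D(x) = w . x; a real critic would be a CNN.
    return x @ w

def wgan_gp_critic_loss(x_real, x_fake, w, lam=10.0):
    # Wasserstein term: E[D(fake)] - E[D(real)]
    wass = critic(x_fake, w).mean() - critic(x_real, w).mean()
    # Gradient penalty on interpolates x_hat = eps*real + (1-eps)*fake.
    # For a linear critic, grad_x D(x_hat) = w for every x_hat.
    eps = rng.uniform(size=(x_real.shape[0], 1))
    x_hat = eps * x_real + (1.0 - eps) * x_fake
    grad_norm = np.linalg.norm(np.broadcast_to(w, x_hat.shape), axis=1)
    gp = lam * ((grad_norm - 1.0) ** 2).mean()
    return wass + gp

# Toy "real" and "generated" samples drawn from shifted distributions.
x_real = rng.normal(loc=1.0, size=(8, 4))
x_fake = rng.normal(loc=-1.0, size=(8, 4))
w = np.full(4, 0.5)  # ||w|| = 1, so the penalty term vanishes here
loss = wgan_gp_critic_loss(x_real, x_fake, w)
print(float(loss))
```

The penalty pushes the critic's input-gradient norm toward 1, enforcing the 1-Lipschitz constraint that the Wasserstein formulation requires; in a full training loop the critic minimizes this loss while the generator maximizes E[D(fake)].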
Keywords: Alzheimer’s disease; deep learning; data augmentation; generative adversarial network; positron emission tomography
Figure 1
MDPI and ACS Style

Kang, H.; Park, J.-S.; Cho, K.; Kang, D.-Y. Visual and Quantitative Evaluation of Amyloid Brain PET Image Synthesis with Generative Adversarial Network. Appl. Sci. 2020, 10, 2628.

Note that from the first issue of 2016, MDPI journals use article numbers instead of page numbers.
