Open Access Article

Utilizing Amari-Alpha Divergence to Stabilize the Training of Generative Adversarial Networks

by Likun Cai 1,2,3,*, Yanjie Chen 1,2,3, Ning Cai 1, Wei Cheng 4 and Hao Wang 1

1. School of Information Science and Technology, ShanghaiTech University, Shanghai 201210, China
2. Shanghai Institute of Microsystem and Information Technology, Chinese Academy of Sciences, Shanghai 200050, China
3. University of Chinese Academy of Sciences, Beijing 100049, China
4. NEC Laboratories America, Inc. (NEC Labs), NEC Corporation, Princeton, NJ 08540, USA
* Author to whom correspondence should be addressed.
Entropy 2020, 22(4), 410; https://doi.org/10.3390/e22040410
Received: 25 February 2020 / Revised: 25 March 2020 / Accepted: 31 March 2020 / Published: 4 April 2020
(This article belongs to the Special Issue Deep Generative Models)
Generative Adversarial Nets (GANs) are among the most popular architectures for image generation and have achieved significant progress in generating high-resolution, diverse image samples. Standard GANs minimize the Kullback–Leibler divergence between the distributions of natural and generated images. In this paper, we propose the Alpha-divergence Generative Adversarial Net (Alpha-GAN), which adopts the alpha divergence as the minimization objective of the generator. The alpha divergence can be regarded as a generalization of the Kullback–Leibler divergence, the Pearson χ² divergence, the Hellinger divergence, etc. Alpha-GAN employs a power function with two order indexes as the adversarial loss of the discriminator. These hyper-parameters make the model more flexible in trading off between the generated and target distributions. We further give a theoretical analysis of how to select these hyper-parameters to balance training stability against the quality of generated images. Extensive experiments on the SVHN and CelebA datasets demonstrate the stability of Alpha-GAN, and the generated samples are competitive with state-of-the-art approaches.
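To illustrate the generalization the abstract describes, the sketch below computes the Amari alpha-divergence between two discrete distributions under one common parameterization, D_α(p‖q) = (Σᵢ pᵢ^α qᵢ^(1−α) − 1) / (α(α−1)). This is an illustrative assumption, not the paper's exact loss (conventions for the alpha-divergence differ by scaling and sign of α); under this convention, α → 1 recovers KL(p‖q), α → 0 recovers KL(q‖p), α = 2 gives half the Pearson χ² divergence, and α = 1/2 gives four times the squared Hellinger distance.

```python
import numpy as np

def alpha_divergence(p, q, alpha):
    """Amari alpha-divergence between discrete distributions p and q.

    D_alpha(p||q) = (sum_i p_i^alpha * q_i^(1-alpha) - 1) / (alpha * (alpha - 1))

    Limiting / special cases under this convention (one of several in the
    literature; illustrative, not the paper's exact parameterization):
      alpha -> 1 : KL(p || q)
      alpha -> 0 : KL(q || p)
      alpha = 2  : (1/2) * Pearson chi^2(p || q)
      alpha = 1/2: 4 * squared Hellinger distance
    """
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    if np.isclose(alpha, 1.0):   # limit alpha -> 1: KL(p || q)
        return float(np.sum(p * np.log(p / q)))
    if np.isclose(alpha, 0.0):   # limit alpha -> 0: KL(q || p)
        return float(np.sum(q * np.log(q / p)))
    return float((np.sum(p**alpha * q**(1.0 - alpha)) - 1.0)
                 / (alpha * (alpha - 1.0)))

# Two toy discrete distributions over three outcomes.
p = np.array([0.5, 0.3, 0.2])
q = np.array([0.4, 0.4, 0.2])

kl = alpha_divergence(p, q, 1.0)               # KL(p || q)
near_kl = alpha_divergence(p, q, 1.0 + 1e-6)   # should approach the KL value
chi2 = float(np.sum((p - q) ** 2 / q))         # Pearson chi^2(p || q)
half_chi2 = alpha_divergence(p, q, 2.0)        # should equal chi2 / 2
```

Varying α interpolates between divergences that penalize mode-covering and mode-seeking behavior differently, which is the flexibility the hyper-parameters of Alpha-GAN exploit.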
Keywords: Alpha divergence; generative adversarial network; unsupervised image generation; deep neural networks
MDPI and ACS Style

Cai, L.; Chen, Y.; Cai, N.; Cheng, W.; Wang, H. Utilizing Amari-Alpha Divergence to Stabilize the Training of Generative Adversarial Networks. Entropy 2020, 22, 410.

