Article

Data-Dependent Conditional Priors for Unsupervised Learning of Multimodal Data

1 Faculty of Science, Computer Science Department, University of Geneva, 1214 Geneva, Switzerland
2 Geneva School of Business Administration (DMML Group), HES-SO, 1227 Geneva, Switzerland
* Author to whom correspondence should be addressed.
This paper is an extended version of our paper published in the European Conference on Artificial Intelligence, ECAI 2020.
Entropy 2020, 22(8), 888; https://doi.org/10.3390/e22080888
Received: 26 July 2020 / Revised: 8 August 2020 / Accepted: 11 August 2020 / Published: 13 August 2020
One of the major shortcomings of variational autoencoders is the inability to produce generations from the individual modalities of data originating from mixture distributions. This is primarily due to the use of a simple isotropic Gaussian as the prior for the latent code in the ancestral sampling procedure for data generation. In this paper, we propose a novel formulation of variational autoencoders, the conditional prior VAE (CP-VAE), with a two-level generative process for the observed data in which a continuous variable z and a discrete variable c are introduced in addition to the observed variables x. By learning data-dependent conditional priors, the new variational objective naturally encourages a better match between the posterior and prior conditionals, and the learning, in an unsupervised manner, of latent categories encoding the major sources of variation in the original data. By sampling the continuous latent code from the data-dependent conditional priors, we are able to generate new samples from the individual mixture components corresponding to the multimodal structure of the original data. Moreover, we unify and analyse our objective under different independence assumptions for the joint distribution of the continuous and discrete latent variables. We provide an empirical evaluation on one synthetic dataset and three image datasets, FashionMNIST, MNIST, and Omniglot, illustrating the generative performance of our new model compared to multiple baselines.
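The two-level ancestral sampling described above (draw a discrete category c, draw a continuous code z from the learned conditional prior p(z | c), then decode) can be sketched in a few lines of PyTorch. The sketch below is illustrative only: the class name, layer sizes, and the per-category Gaussian parameterisation of p(z | c) are assumptions made for clarity, not the authors' implementation.

```python
# Minimal sketch of two-level ancestral sampling with a conditional prior,
# assuming p(z | c) is a diagonal Gaussian with learned per-category parameters
# and p(x | z) is a simple feed-forward decoder. Hypothetical names throughout.
import torch
import torch.nn as nn

class ConditionalPriorSampler(nn.Module):
    def __init__(self, num_categories=10, latent_dim=32, data_dim=784):
        super().__init__()
        # Learned parameters of the conditional prior p(z | c), one row per category.
        self.prior_mu = nn.Parameter(torch.zeros(num_categories, latent_dim))
        self.prior_logvar = nn.Parameter(torch.zeros(num_categories, latent_dim))
        # Placeholder decoder p(x | z).
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, data_dim), nn.Sigmoid(),
        )

    @torch.no_grad()
    def generate(self, category, num_samples=16):
        # 1) Fix the discrete latent c (the mixture component of interest).
        mu = self.prior_mu[category]
        std = (0.5 * self.prior_logvar[category]).exp()
        # 2) Sample the continuous latent z ~ p(z | c) instead of N(0, I).
        z = mu + std * torch.randn(num_samples, mu.shape[0])
        # 3) Decode to obtain generations from that single component.
        return self.decoder(z)

sampler = ConditionalPriorSampler()
samples = sampler.generate(category=3)   # generations tied to one latent category
print(samples.shape)                     # torch.Size([16, 784])
```

Replacing the standard N(0, I) prior with p(z | c) in step 2 is what allows sampling from an individual mixture component rather than from the pooled latent space.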
Keywords: VAE; generative models; learned prior